Network-Optimised Spiking Neural Network for Event-Driven Networking
Abstract
Spiking neural networks support event-driven computation suited to time-critical networking tasks such as anomaly detection, local routing control, and congestion management at the edge. However, classical units, including Hodgkin–Huxley, Izhikevich, and the Random Neural Network, map poorly to these needs. This work introduces a compact two-variable neuron, Network-Optimised Spiking (NOS), whose state corresponds to normalised queue occupancy and a recovery resource. The model uses a saturating nonlinearity to represent finite buffers, a service-rate leak, and graph-local inputs with delays and optional per-link gates. It supports two differentiable reset schemes for training and deployment. We derive conditions for existence and uniqueness of equilibrium, provide local stability tests from the Jacobian trace and determinant, and obtain a network threshold that scales with the Perron eigenvalue of the coupling matrix. The analysis yields an operational rule that links damping and offered load, shows how saturation enlarges the stable region, and explains finite-size smoothing of synchrony onsets. Stochastic arrivals follow a compound Poisson shot-noise model aligned with telemetry smoothing, which gives closed-form sensitivity and variance expressions. Against queueing baselines NOS matches the light-load M/M/1 mean by calibration while truncating deep tails under bursty input. In closed loop it produces decisive, low-jitter marking with short settling. In zero-shot forecasting without labels NOS is calibrated per node from arrival statistics and known service-rate, damping, and delay parameters. Its bounded event-driven dynamics yield high AUROC and AUPRC, enabling timely detection of congestion onsets with few false positives. Under a train-calibrated residual protocol across chain, star, and scale-free topologies, NOS improves early-warning F1 and detection latency over MLP, RNN, GRU, and a simple temporal GNN. We provide practical guidance for data-driven initialisation and surrogate-gradient training with a homotopy on reset sharpness, together with explicit stability checks and topology-aware bounds suitable for resource-constrained deployments.
keywords
Spiking neural networks, Congestion control, Active queue management, Explicit Congestion Notification, Network stability, Spectral graph theory, Distributed control, Queueing theory
1 Introduction
Machine learning has reshaped core networking tasks, including short-horizon traffic forecasting, anomaly detection, and routing optimisation. Conventional deep models such as multilayer perceptrons, recurrent networks, and graph neural networks perform well when labels are plentiful and training is done offline. Spiking neural networks offer a complementary path. Their event-driven computation is naturally sparse in time and energy, which aligns with packet-level operation at the edge and in devices with tight power budgets. Recent surveys and hardware results report large efficiency gains for spiking workloads on neuromorphic substrates, motivating designs that couple spiking dynamics to network semantics [6, 8, 5]. Foundational work on temporal coding and computational power further motivates spike-based processing in control settings [21]. In parallel, hardware platforms such as Intel’s Loihi and its toolchain demonstrate on-chip learning and practical workflows for deploying SNNs [8, 23].
Classical spiking neural network (SNN) models, while powerful for neuroscientific exploration, are poorly matched to networking practice due to several fundamental limitations. First, they often prioritise biological analogy, defining abstract internal states rather than variables with direct network interpretations such as queue occupancy, service rate, or link delay [15, 28]. This biologically focused modelling also leads to dynamics that present significant optimisation challenges: the hard threshold-and-reset dynamics intrinsic to spiking neurons create a non-differentiable discontinuity that severely obstructs gradient-based optimisation, a cornerstone of modern machine learning [1, 39]. Furthermore, the treatment of topology and per-link quality is often implicit or uniform, neglecting the complex, heterogeneous, and weighted graphs that define real-world networks such as data centres or the Internet. From a dynamical-systems perspective, the excitability in common SNN formulations can be unbounded, leading to unrealistic behaviour under high input load and significant numerical instability [38]. Finally, implementations face practical limits in training cost, input encoding, and deployment on resource-constrained neuromorphic or edge hardware. Consequently, evaluation remains limited, often failing to bridge the gap between abstract simulations and performance on real network topologies and traffic patterns [15, 37]. Early Poisson spiking models also show that spike-driven networks need not admit Hopfield-style energy functions, reinforcing the case for operational, domain-grounded formulations [7]. The Hodgkin–Huxley [16] equations are biophysically correct but heavy for embedded use. Izhikevich's [18] formulation is efficient, but its quadratic drive is unbounded and its reset is discontinuous, which complicates training and interpretation for queues. Random Neural Networks and G-networks [11] bridge neural and queueing views, yet their canonical forms do not provide the graph-aware, differentiable, and parameter-interpretable behaviour needed for packet dynamics on real topologies. These gaps motivate a compact spiking unit whose state maps to normalised queue occupancy and recovery, whose inputs are graph-local with delays and optional per-link gates, and whose reset is differentiable for gradient-based optimisation [39]. A second driver is deployment. Neuromorphic systems such as Intel's Loihi families demonstrate on-chip learning rules, synaptic delays, and event-driven execution that can reduce energy for small edge workloads while keeping latency low [8]. Related SNN studies in robotics and generic complex-network settings illustrate broader interest in topology and control, but their objectives differ from the packet dynamics and queue semantics considered here [40, 33, 22]. Such systems reinforce the case for spiking designs that expose measurable parameters and training handles, so that network operators can calibrate models from telemetry and run them under tight power envelopes.
1.1 Spiking Neural Networks for Networking: Scope and Complementarity
Spiking models describe a time-varying state and emit discrete events when a threshold is crossed. In the biological literature this is written in terms of a membrane potential; for networking the same idea can be read as a state that advances under inputs and intrinsic dynamics until an event is produced. The event acts as the computational primitive. Intelligent networking has been led by deep learning, reinforcement learning, and graph neural networks trained on aggregates or labelled flows. These methods assume synchronous batches and centralised compute. Spiking neural networks use event-driven updates, which aligns with irregular packet arrivals, bursty congestion, and anomaly signatures that depart from baseline rather than average behaviour. The practical question is when this alignment matters enough to prefer an SNN, and when conventional ML or GNNs remain the better tool.
Problem domain | More suitable with SNNs | More suitable with ML/GNNs |
---|---|---|
Anomaly detection (e.g., DDoS, worm outbreaks) | Event-driven detection of sudden bursts; low-latency alarms [26, 4] | Flow-level detection from large corpora; supervised and deep models [2, 19] |
Routing optimisation | Local adaptation in ad hoc or sensor settings using local rules; energy- and event-efficient deployment on neuromorphic hardware [8, 9] | Global path optimisation and traffic engineering with topology-wide objectives; GNN/RL controllers [36] |
QoS prediction (throughput, delay) | Edge-side reaction to micro-bursts under power limits (event-driven inference) [8] | Predictive modelling from aggregates; offline/online regression and forecasting [14, 3, 35] |
Traffic classification | Temporal signatures (periodicity, jitter) via spike timing and event patterns [31, 30] | Feature-rich flow classification with labels using CNN/RNN/GNN [29, 41, 34] |
Wireless resource allocation | Local duty-cycling, sensing, and lightweight on-device decision-making under tight energy budgets [9] | Centralised spectrum allocation and large-scale optimisation with RL/GNN [25, 42, 17] |
Network resilience and fault detection | Distributed detection of local failures and sudden state changes in streams [26, 4] | Correlating failures across topology; resilience modelling and recovery for SDN/CPS [27, 32, 24] |
Event-driven versus batch-oriented learning.
Packet systems are inherently event-driven. Conventional pipelines buffer into fixed windows before inference, adding latency and energy. SNNs update on arrival, so a single event can trigger a decision without waiting for a batch. This is attractive for anomaly alerts and micro-burst monitoring where early reaction is valuable.
Energy and deployment constraints.
Edge devices and in-network elements often face tight power and memory budgets. Neuromorphic platforms (e.g., Loihi and SpiNNaker) report energy advantages for spiking workloads, which supports in situ decisions at low latency [8, 10]. In contrast, data-centre controllers and core routers, with ample power and global context, typically profit from conventional ML and GNNs.
Temporal coding and traffic structure.
Some networking tasks hinge on timing patterns rather than sheer volume: jitter in interactive media, periodic bursts in games, or hotspot prediction from inter-arrival times. SNNs encode such structure in spike timing and phase. Recurrent deep models can capture time as well, but temporal coding is native to spiking dynamics.
Local learning and autonomy.
Networks are distributed. Routers, base stations, and sensors often adapt with limited scope. SNNs admit local rules such as spike-timing-dependent plasticity (STDP), enabling on-node adaptation. Global optimisation over topology and multi-objective trade-offs remains a strength of GNNs and reinforcement learning.
Complementary roles.
SNNs and conventional AI are complementary. SNNs provide efficient, event-driven reflexes for local, time-critical decisions under tight budgets. Deep learning and GNNs act as global planners that integrate structure and statistics. A practical design places SNNs in edge nodes or programmable switches for reflexive response, coordinated with ML/GNN controllers for end-to-end policy.
1.2 Contributions and Findings
This article develops a Network-Optimised Spiking neuron (NOS) and examines its behaviour from single node to network scale. The model is compact and two-state, with each state and parameter tied to measurable network quantities. Queue occupancy serves as the fast state, and a slower recovery variable captures post-burst slowdown. Service appears as leak terms. Inputs are graph-local with options for per-link gating and fixed delay. A saturating nonlinearity prevents unbounded growth and represents finite buffers. Two differentiable reset strategies are introduced—an event-triggered exponential soft reset and a continuous pullback shaped by a sigmoid—both enabling gradient-based training while preserving spiking dynamics.
The analysis gives equilibrium existence and uniqueness, stability tests from the Jacobian, and a network threshold that scales with the Perron eigenvalue of the coupling matrix. Saturation enlarges the stable region by reducing the effective excitability slope. An operational margin links damping with offered load and provides a two-parameter contour for planning and capacity checks. Stochastic arrivals are represented as shot noise or compound Poisson input with calibrated scales. Practical guidance covers parameter initialisation from telemetry, surrogate-gradient training with a homotopy in reset sharpness, and deployment on neuromorphic platforms. The evaluation protocol uses label-free forecasting with residual-based event scoring, in line with established practice in time-series anomaly detection.
Findings show that bounded excitability yields unique single-node equilibria and wider stability regions than the unbounded quadratic drive. Network stability follows the Perron eigenvalue, giving a direct means of control through weight normalisation and topology choice. Differentiable resets remove optimisation difficulties caused by hard jumps and enable surrogate-gradient training without disturbing event timing. In residual-based forecasting, NOS acts as a physics-aware forecaster, achieving competitive accuracy and latency on chain, star, and scale-free graphs while remaining interpretable and suitable for neuromorphic execution.
2 Background: classical spiking paradigms and networking suitability
This section reviews three classical spiking neural network formalisms, examines what they offer, and assesses their suitability for networking tasks where queue semantics, topology, and trainability play an important role.
2.1 Random Neural Network (Gelenbe)
The Random Neural Network (RNN) treats each neuron as an integer-valued counter driven by excitatory and inhibitory Poisson streams and yields a product-form steady state [12]. Let $q_i$ denote the activity of neuron $i$. In steady state

$$q_i \;=\; \frac{\lambda_i^{+}}{r_i + \lambda_i^{-}}, \tag{1}$$

$$\lambda_i^{+} \;=\; \Lambda_i + \sum_j q_j\, r_j\, p_{ji}^{+}, \qquad \lambda_i^{-} \;=\; \lambda_i + \sum_j q_j\, r_j\, p_{ji}^{-}, \tag{2}$$

where $\Lambda_i, \lambda_i$ are exogenous excitatory and inhibitory rates, $r_i$ is the firing rate, and $p_{ji}^{\pm}$ are routing probabilities, and the joint distribution factorises as $\Pr[k_1, \dots, k_n] = \prod_i (1 - q_i)\, q_i^{k_i}$.
The queueing affinity and fixed points make calibration attractive. However, backbone and datacentre traces are overdispersed and often long-memory, so a stationary product form understates burst clustering and tail risk. This follows from the Poisson and exponential assumptions of the model, which wash out temporal correlation and refractoriness. Product-form results emphasise stationarity, whereas operations rely on transient detection and control. The rate and routing parameters do not tie directly to per-link capacity, delay, or gating. Under strong excitation, the signal-flow equations can fail to admit a valid solution with $q_i < 1$, leading to instability or saturation; when a solution exists it is unique [12, 13].
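For concreteness, (1)–(2) can be solved by fixed-point iteration. The sketch below is a minimal illustration; the arrays, rates, and stopping rule are hypothetical rather than taken from [12]:

```python
import numpy as np

def rnn_fixed_point(Lam, lam, r, Pp, Pm, tol=1e-10, max_iter=10_000):
    """Iterate the Gelenbe RNN signal-flow equations (1)-(2).

    Lam, lam : exogenous excitatory / inhibitory Poisson rates, shape (n,)
    r        : firing rates, shape (n,)
    Pp, Pm   : excitatory / inhibitory routing matrices, shape (n, n)
    Returns activities q with 0 <= q_i < 1 when a valid solution exists.
    """
    q = np.zeros(len(r))
    for _ in range(max_iter):
        lam_plus = Lam + (q * r) @ Pp    # total excitatory arrival rates
        lam_minus = lam + (q * r) @ Pm   # total inhibitory arrival rates
        q_new = lam_plus / (r + lam_minus)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    if np.any(q >= 1.0):
        raise ValueError("no valid solution with q_i < 1 (overload regime)")
    return q
```

When the iteration converges with all $q_i < 1$, the product form above follows immediately; when some $q_i \to 1$, the overload behaviour described in the text appears.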
2.2 Hodgkin–Huxley
The Hodgkin–Huxley (HH) equations provide biophysical fidelity through voltage-gated conductances and reproduce spikes, refractoriness and ionic mechanisms [16]. The membrane and current relations are
$$C_m\, \frac{dV}{dt} \;=\; I_{\mathrm{ext}} - I_{\mathrm{Na}} - I_{\mathrm{K}} - I_{\mathrm{L}}, \tag{3}$$

$$I_{\mathrm{Na}} = \bar g_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}), \qquad I_{\mathrm{K}} = \bar g_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}), \qquad I_{\mathrm{L}} = \bar g_{\mathrm{L}}\, (V - E_{\mathrm{L}}), \tag{4}$$

$$\frac{dx}{dt} \;=\; \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\}, \tag{5}$$

where $V$ is the membrane potential and $C_m$ the membrane capacitance; $I_{\mathrm{Na}}, I_{\mathrm{K}}, I_{\mathrm{L}}$ are the sodium, potassium, and leak currents; $I_{\mathrm{ext}}$ is external or synaptic input; $\bar g_{(\cdot)}$ and $E_{(\cdot)}$ are maximal conductances and reversal potentials; $m, h, n$ are gating variables with voltage-dependent rates $\alpha_x(V), \beta_x(V)$.
Hodgkin–Huxley offers mechanistic fidelity and a broad repertoire of excitable behaviour, and recent results show that HH neurons can be trained end to end with surrogate gradients, achieving competitive accuracy with very sparse spiking on standard neuromorphic benchmarks [20]. This confirms feasibility for learning and suggests potential efficiency from activity sparsity. For network packet-level control, the obstacles are practical rather than conceptual. The model comprises four coupled nonlinear differential equations with voltage-dependent rate laws, which typically require small timesteps and careful numerical treatment; in practice this raises computational cost and stiffness at scale, and training can fail without tailored integrators and schedules [20]. There is also no direct mapping from biophysical parameters to queue occupancy, service rate, or link delay, so calibration from network telemetry lacks interpretability. Taken together, HH remains a benchmark for cellular electrophysiology [16], but it is a poor fit for networking tasks that require lightweight, semantically mapped states and trainable dynamics.
2.3 Izhikevich
The Izhikevich model balances speed and diversity of firing patterns [18]. Its dynamics and reset are
$$\frac{dv}{dt} \;=\; 0.04\, v^2 + 5\, v + 140 - u + I, \tag{6}$$

$$\frac{du}{dt} \;=\; a\,(b\, v - u); \qquad \text{if } v \ge 30\ \mathrm{mV}: \; v \leftarrow c,\ u \leftarrow u + d, \tag{7}$$

where $v$ is the membrane potential (mV); $u$ is a recovery variable with the same units as $v$ that aggregates activation and inactivation effects; $I$ is the input drive expressed as an equivalent voltage rate (mV ms$^{-1}$). The spike condition is $v \ge 30$ mV, after which $v \leftarrow c$ and $u \leftarrow u + d$. $a$ is the inverse time constant of $u$ (ms$^{-1}$); $b$ is the coupling gain from $v$ to $u$ (dimensionless); $c$ is the post-spike reset level of $v$ (mV); $d$ is the post-spike increment of $u$ (mV).
The Izhikevich model achieves breadth of firing patterns with two state variables and a hard after-spike reset (6)–(7). This efficiency is documented through large-scale pulse-coupled simulations and parameter recipes for cortical cell classes. For networking, three constraints matter. First, the quadratic drift in (6) is not intrinsically saturating, which is mismatched to finite-buffer behaviour and can drive unrealistically rapid growth between resets under heavy input. Second, the reset in (7) is discontinuous, which hinders gradient-based optimisation. Third, the parameters are phenomenological and the connectivity is generic, so queue occupancy, service rate, link delay, and per-link quality are not first-class. These features explain the model’s value for neuronal dynamics and its limited interpretability for packet-level control.
2.4 Synthesis
Across the three formalisms the evidence points to the same gaps. Either the model is correct but heavy, or it is fast but carries non-differentiable resets and unbounded excitability. In all cases the parameters do not align with queue occupancy, service, and delay. Topology and per-link quality are usually implicit rather than explicit. For packet networks this limits interpretability, stability under high load, and the ability to train or adapt in situ. These observations set the design brief for NOS: a compact two-state unit with finite-buffer saturation, a service-rate leak and a differentiable reset, graph-local inputs with delays and optional gates, and parameters that map to observable network quantities.
Criterion | RNN (Gelenbe) | Hodgkin–Huxley | Izhikevich |
---|---|---|---|
State/parameters map to queue, service, delay | Limited (probabilistic rates) | No | No |
Trainability for control (gradients) | Moderate at steady state | Low | Low (non-differentiable reset) |
Topology and per-link attributes explicit | Limited | No | No |
Finite-buffer realism / stability under load | Stationary focus | Realistic but heavy | Unbounded drive |
Edge-side compute/energy | Low–moderate | High | Low |
3 Proposed model: Network-Optimised Spiking neural network (NOS)
Networking workloads are shaped by graph structure, finite buffers, and service constraints, and they must be trainable from telemetry. We therefore begin with graph-local inputs that respect neighbourhoods and link heterogeneity,
$$I_i(t) \;=\; \sum_{j \in \mathcal{N}(i)} w_{ij}\, s_j(t), \tag{8}$$

where $s_j(t)$ is a presynaptic event train and $w_{ij}$ encodes link capacity, policy weight, or reliability. Three implementation choices motivate the design. First, the reset must be continuous and differentiable so gradient-based training is possible. Second, state variables and parameters should map to observables such as normalised queue length, service rate, and recovery time. Third, subthreshold integration needs explicit stochastic drive so gradual load accumulation and burstiness are both represented. These choices lead to a compact two-variable unit for scalability and neuromorphic deployment, bounded excitability that respects finite buffers, graph-aware coupling with optional per-link gates and delays, leaky subthreshold dynamics, and data-calibrated parameters.
3.1 Variable reinterpretation
Inspired by [18], we keep a two-variable unit but reinterpret its state for networking. The fast state $v$ represents a normalised queue or congestion level, with $v \in [0, 1]$ mapping empty to full buffer. The slow state $u$ represents a recovery or slowdown resource that builds during bursts and relaxes thereafter.

Here $v = 1$ corresponds to a full buffer. The state $u$ summarises pacing, token replenishment, or rate-limiter cool-down that follows a burst. This choice allows direct initialisation from traces: $v$ from queue occupancy, $u$ from measured relaxation time, and the link between them from decay after micro-bursts. It also clarifies units. Both $v$ and $u$ evolve on the same time scale, which keeps linearisation and stability analysis transparent.
3.2 State variables and dynamics
For each node $i$ we adopt a bounded excitability function and explicit damping:

$$\dot v_i \;=\; \phi(v_i) \;+\; \eta\, v_i \;+\; c_0 \;-\; u_i \;-\; (\mu + \gamma)\, v_i \;+\; I_i(t), \tag{9}$$

$$\dot u_i \;=\; a\,\big(b\, v_i - u_i\big). \tag{10}$$

Equation (9) separates four operational effects. The term $\phi(v_i)$ (see (11)) captures the convex rise of backlog during rapid arrivals while remaining bounded. The linear part $\eta v_i + c_0$ fits residual slope and offset that are visible in practice after removing coarse trends. The coupling $-u_i$ models recovery drag that slows re-accumulation immediately after events. Finally, $-(\mu + \gamma)\, v_i$ represents service and small-signal damping. Equation (10) integrates recent congestion with sensitivity $b$ and relaxes at rate $a$. Thus, the NOS model reflects three intuitive aspects of queue behaviour: the rise under increasing load, the draining through service, and the short-lived slowdown that follows bursts. A complete schematic including graph-local gates, delays, stochastic threshold, and differentiable reset appears in Fig. 2 (§3.5–§3.7).
3.3 Bounded excitability
The quadratic term used in Izhikevich provides excitability but grows without bound. In a networking view, this implies unconstrained buffer growth at a router. We therefore replace it with a saturating nonlinearity

$$\phi(v) \;=\; \frac{\alpha\, v^2}{1 + \lambda\, v^2}, \tag{11}$$

which preserves a quadratic ramp for small load,

$$\phi(v) \;=\; \alpha\, v^2 + O(v^4) \qquad \text{as } v \to 0, \tag{12}$$

and enforces a finite ceiling for heavy load, ensuring bounded behaviour:

$$\lim_{v \to \infty} \phi(v) \;=\; \frac{\alpha}{\lambda}. \tag{13}$$

The derivative is globally bounded, which improves numerical stability and yields clean gain conditions: it recovers a quadratic slope for small $v$ while enforcing saturation for large $v$:

$$\phi'(v) \;=\; \frac{2\,\alpha\, v}{(1 + \lambda\, v^2)^2}. \tag{14}$$

$\phi'$ is globally bounded, with maximum

$$\sup_{v \ge 0} \phi'(v) \;=\; \frac{3\sqrt{3}\,\alpha}{8\sqrt{\lambda}}, \qquad \text{attained at } v = \frac{1}{\sqrt{3\lambda}}. \tag{15}$$

(15) is the steepest admissible local "growth" that service and damping must dominate to avoid runaway in bursts.
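As a concrete illustration, the subthreshold dynamics (9)–(11) integrate in a few lines with forward Euler. All constants below are illustrative defaults, not calibrated values:

```python
import numpy as np

def phi(v, alpha, lam):
    """Bounded excitability (11): quadratic ramp, ceiling alpha/lam."""
    return alpha * v**2 / (1.0 + lam * v**2)

def simulate_nos(T=2000, dt=0.01, alpha=1.0, lam=2.0, eta=0.1, c0=0.05,
                 mu=0.8, gamma=0.3, a=0.2, b=0.5, I=None, seed=0):
    """Forward-Euler integration of the subthreshold NOS dynamics (9)-(10).
    Parameter values are illustrative, not calibrated from telemetry."""
    rng = np.random.default_rng(seed)
    v, u = 0.0, 0.0
    vs = np.empty(T)
    for t in range(T):
        drive = I[t] if I is not None else 0.02 * rng.poisson(0.5)
        dv = phi(v, alpha, lam) + eta * v + c0 - u - (mu + gamma) * v + drive
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        vs[t] = v
    return vs
```

The bounded slope (15) is what keeps this loop numerically tame even with coarse steps, in contrast to the unbounded quadratic drive of (6).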
3.4 Equilibrium and boundedness
At steady state we eliminate $u$ using $u^* = b\, v^*$ from (10) and obtain a scalar balance:

$$F(v^*) \;:=\; \phi(v^*) \;-\; A\, v^* \;+\; B \;=\; 0, \tag{16}$$

where $A = (\mu + \gamma) + b - \eta$ and $B = c_0 + \bar I$ collect the effective linear and constant drives implied by (9). The next result provides a sufficient, checkable criterion.
Lemma 1 (Existence and uniqueness).
Let $[0, v_{\max}]$ be the admissible queue interval, for example $[0, 1]$. If

$$A \;>\; \sup_{v \in [0, v_{\max}]} \phi'(v), \tag{17}$$

then $G(v) := A v - \phi(v) - B$ is strictly increasing on $[0, v_{\max}]$ and has at most one root. If, in addition, there exist $v_1 < v_2$ in $[0, v_{\max}]$ with $G(v_1) < 0 < G(v_2)$, then a unique equilibrium exists. A sufficient global condition is

$$A \;\ge\; \frac{3\sqrt{3}\,\alpha}{8\sqrt{\lambda}}. \tag{18}$$
Discussion. Condition (17) states that net drain dominates the steepest admissible excitability slope. The bound (18) follows from (15). It is conservative, easy to verify from telemetry, and stable against modest estimation error.
Corollary 1.1 (Algebraic condition and networking view).
For the coefficients in (9)–(10), a sufficient condition for a unique equilibrium in $[0, 1]$ is

$$(\mu + \gamma) + b \;>\; \eta + \sup_{v \in [0, 1]} \phi'(v), \tag{19}$$

or, equivalently,

$$(\mu + \gamma) + b \;>\; \eta + \frac{3\sqrt{3}\,\alpha}{8\sqrt{\lambda}}. \tag{20}$$

Operationally, the left side aggregates service, damping, and recovery coupling, while the right side is the worst-case excitability slope adjusted by the linear term $\eta$. At full buffer, service must dominate arrivals; at empty buffer, leakage should not push the queue negative. Hence a single operating point exists within $[0, 1]$.
Engineering rule of thumb.
Choose $\lambda$ so that the knee of $\phi$ lies near the full-buffer scale, for example $1/\sqrt{\lambda} \approx 1$. Fit $\alpha$ from small-signal curvature so that $\phi(v) \approx \alpha v^2$ around typical loads. Then tune $\mu$ and $\gamma$ to satisfy

$$(\mu + \gamma) + b \;\ge\; \eta + \frac{3\sqrt{3}\,\alpha}{8\sqrt{\lambda}} + \delta, \tag{21}$$

with a small safety margin $\delta > 0$ to absorb burstiness. This keeps the operating point in the unique, stable region and reduces sensitivity to trace noise.
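This rule is cheap to check in code. A sketch with hypothetical parameter values, using the closed-form supremum (15):

```python
import math

def phi_slope_sup(alpha, lam):
    """Supremum of phi'(v) from (15): 3*sqrt(3)*alpha / (8*sqrt(lam))."""
    return 3.0 * math.sqrt(3.0) * alpha / (8.0 * math.sqrt(lam))

def uniqueness_margin(mu, gamma, b, eta, alpha, lam):
    """Left side of (21) minus right side (excluding delta); positive means
    service and damping dominate the worst-case excitability slope."""
    return (mu + gamma) + b - (eta + phi_slope_sup(alpha, lam))

# Illustrative (hypothetical) parameters:
assert uniqueness_margin(mu=0.8, gamma=0.3, b=0.5, eta=0.1,
                         alpha=1.0, lam=2.0) > 0
```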
3.5 Graph-local inputs, explicit delays, and per-link queues
Topology enters through delayed presynaptic events (neighborhood wiring, per-link delays, and optional link-state gates that modulate influence during congestion, scheduling, or wireless fading). Starting from (8), we incorporate delays and exogenous noise as

$$I_i(t) \;=\; \sum_{j \in \mathcal{N}(i)} w_{ij}\, s_j(t - d_{ij}) \;+\; \xi_i(t), \tag{22}$$

where $\mathcal{N}(i)$ is the graph neighborhood of node $i$, $s_j$ is the presynaptic event train emitted by $j$, $w_{ij}$ encodes a link's nominal capacity, policy weight, or reliability, $d_{ij}$ is a link delay, and $\xi_i(t)$ is stochastic drive. In practice, $d_{ij}$ can be drawn from Round Trip Time (RTT) profiling after excluding queueing delays, while $w_{ij}$ can be scaled so the spectral radius of the un-gated weight matrix is controlled at design time. Delays ensure that $i$ reacts only after information from $j$ can physically arrive, which is essential when bursts traverse multiple hops and when controllers must respect causality. When links maintain their own queues, we gate the coupling by a per-link occupancy $q_{ij}$:

$$\dot q_{ij} \;=\; a_{ij}\, s_j(t - d_{ij}) \;-\; c_{ij}\, q_{ij}, \tag{23}$$

and replace $w_{ij}$ by $w_{ij}\, g(q_{ij})$. This produces a two-layer representation: node integrator plus per-link capacity gating. Here $a_{ij}$ maps presynaptic activity to arrivals on link $(j, i)$, $c_{ij}$ represents service or RED/ECN-like bleed, and $g(\cdot)$ is a bounded, monotone gate that reduces influence when the link is loaded. This separates phenomena cleanly: node excitability remains interpretable in queue units, while link congestion only modulates the strength and timing of coupling.
Per–link queue gating (dimensionally explicit).
Let $q_{ij} \in [0, 1]$ denote the normalised occupancy of link $(j, i)$, with $q_{ij} = 1$ at full buffer. Choose a single time constant $\tau_q$ and a bounded, dimensionless encoding $\sigma(\cdot)$ of the presynaptic drive $s_j$. We model the queue as a leaky accumulator of arrivals:

$$\dot q_{ij} \;=\; \frac{\sigma\!\big(s_j(t - d_{ij})\big) - q_{ij}}{\tau_q}, \tag{24}$$

so that $q_{ij} \in [0, 1]$ and the drive is dimensionless. The effective coupling is then

$$\tilde w_{ij}(t) \;=\; w_{ij}\, g\!\big(q_{ij}(t)\big), \tag{25}$$

with gates such as

$$g(q) \;=\; (1 - q)^{p}, \qquad p \ge 1, \tag{26}$$

$$g(q) \;=\; \frac{1}{1 + e^{\kappa (q - q_0)}}, \qquad \kappa > 0. \tag{27}$$

One then replaces $w_{ij}$ in (22) by $\tilde w_{ij}$. This cleanly separates node adaptation from link capacity effects. In practice it lets an operator disclose where instability originates: node excitability, link queues, or topology. For completeness, the original form (23) is dimensionally consistent provided $a_{ij}$ and $c_{ij}$ carry units of inverse time and $s_j$ is dimensionless; (24) corresponds to the special case $a_{ij} = c_{ij} = 1/\tau_q$ with $s_j$ replaced by $\sigma(s_j)$.
Networking interpretation and design guidance.
Equation (22) enforces hop-level causality: bursts from $j$ affect $i$ only after $d_{ij}$, so back-pressure and wavefront effects appear on the correct timescale. The queue model in (23)–(24) captures how a busy uplink or radio fades coupling without requiring packet loss. The drain term can represent ECN or token-bucket service, while the gate reduces the effective link weight as occupancy rises. The power gate in (26) suits wired interfaces where capacity fades smoothly with load. The logistic gate in (27) matches thresholded policies such as priority drops, wireless duty cycling, or scheduler headroom rules. Because $g(q_{ij})$ multiplies $w_{ij}$ in (22), congestion localises: only the affected links attenuate, which keeps attribution simple in operations dashboards.
Calibration follows telemetry. Choose $d_{ij}$ from the queue-free component of RTT. Set $\tau_q$ near the interface drain time or the QoS smoothing window. Normalise $w_{ij}$ by nominal link rates or policy priorities, then scale $W$ so its spectral radius meets the desired small-signal index; under stress the gate reduces the effective spectral radius, enlarging the stable operating region. In evaluation, this decomposition helps explain behaviour: if instability appears with the gates disengaged ($q_{ij} \approx 0$) the cause is node excitability or topology; if it vanishes when the gate engages, the bottleneck is link capacity or scheduling. See Appendix A for practical calibration notes and examples.
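The link-layer pieces (24)–(27) are a few lines each. A sketch, assuming NumPy and illustrative gate constants:

```python
import numpy as np

def power_gate(q, p=2.0):
    """Power gate (26): coupling fades smoothly as occupancy rises."""
    return (1.0 - np.clip(q, 0.0, 1.0)) ** p

def logistic_gate(q, q0=0.7, kappa=20.0):
    """Logistic gate (27): thresholded attenuation around occupancy q0."""
    return 1.0 / (1.0 + np.exp(kappa * (q - q0)))

def step_link_queue(q, s_delayed, tau_q, dt, encode=np.tanh):
    """One Euler step of the leaky accumulator (24); `encode` is the
    bounded, dimensionless encoding sigma(.) of the presynaptic drive."""
    return q + dt * (encode(s_delayed) - q) / tau_q
```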
3.6 Stochastic arrivals: shot noise and compound Poisson
Network arrivals are bursty, short-correlated within flows, and heavy-tailed across flows. To expose this variability to the model while staying compatible with measurement practice, we represent exogenous drive at node $i$ by a compound Poisson shot-noise process smoothed at an operator-chosen time scale:

$$\xi_i(t) \;=\; \kappa_i \sum_{k=1}^{N_i(t)} A_k\, h(t - t_k), \tag{28}$$

where $N_i(t)$ is a Poisson process of rate $\nu_i$, the amplitudes $A_k$ reflect burst size, and $h$ is a causal exponential smoothing kernel such as

$$h(t) \;=\; e^{-t/\tau_s}, \qquad t \ge 0, \tag{29}$$

$$h(t) \;=\; 0, \qquad t < 0, \tag{30}$$

so $h$ enforces causality (no effect before the burst) and the factor $e^{-t/\tau_s}$ sets an exponential decay with time scale $\tau_s$ (the burst's influence fades smoothly). When needed, a normalised variant keeps unit area:

$$\bar h(t) \;=\; \frac{1}{\tau_s}\, e^{-t/\tau_s}\,\Theta(t). \tag{31}$$

The normalisation $\kappa_i$ is set so that the smoothed drive falls in a practical range for the workload. This construction aligns with the EWMA-style smoothing used in telemetry pipelines and permits direct calibration from packet counters.

Equation (28) mirrors the way operators pre-process packet counters and flow logs: arrivals are bucketed, lightly smoothed, and scaled. The rate $\nu_i$ matches the observed burst-start rate at node $i$ over the binning interval; the amplitude $A_k$ matches per-burst byte or packet mass after the same smoothing; and $\tau_s$ is set to the shortest window that suppresses counter jitter without hiding micro-bursts. The normalisation $\kappa_i$ is chosen so that a single exogenous shot changes $v_i$ by a small fraction of the threshold margin: the lower end of that fraction clears telemetry noise, while the upper end prevents a single burst from forcing a reset. The same scaling applies in discrete time and under the exponential kernel. Using (28) inside $I_i$ in (8) makes the event drive consistent with the graph structure and delays defined in (22). In effect, traffic statistics enter NOS with the same time base and smoothing that appear in the telemetry pipeline, which simplifies both training and post-deployment troubleshooting.
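A discrete-time generator for (28)–(30) can reuse the exact exponential filter that telemetry pipelines already apply; the exponential amplitude law and all constants are illustrative assumptions, and the gain $\kappa_i$ from (28) can be applied to the returned trace:

```python
import numpy as np

def shot_noise(T, dt, nu, mean_amp, tau_s, seed=0):
    """Compound Poisson shot noise (28) with exponential kernel (29)-(30).
    nu: burst-start rate; amplitudes A_k ~ Exp(mean_amp); tau_s: smoothing."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    impulses = np.zeros(n)
    k = rng.poisson(nu * dt, size=n)            # bursts per bin
    for i in np.nonzero(k)[0]:
        impulses[i] = rng.exponential(mean_amp, size=k[i]).sum()
    x, out = 0.0, np.empty(n)
    decay = np.exp(-dt / tau_s)                 # exact filter for h(t)
    for i in range(n):
        x = x * decay + impulses[i]
        out[i] = x
    return out
```

The stationary mean of the generated trace approaches $\nu\,\bar A\,\tau_s$, consistent with the moment formulas used in §5.6.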
3.7 Spike generation and resets
A spike indicates that the local congestion proxy exceeded tolerance and triggers a control action such as ECN marking, rate reduction, or an alert to the controller. To reflect tolerance bands and measurement noise we allow a stochastic threshold with bounded dispersion:
$$v_{\mathrm{th},i}(t) \;=\; \bar v_{\mathrm{th}} + \epsilon_i(t), \tag{32}$$

where $\epsilon_i$ is zero-mean, bounded (e.g., clipped Gaussian or uniform), and its dispersion is tuned to the false-alarm budget on residuals. This construction separates policy (the base threshold) from instrumentation noise (the dispersion).
Event-based exponential soft reset.
Right after a threshold crossing, the unit should return toward a baseline without a hard discontinuity. For a forward-Euler step of size $\Delta t$ we apply

$$v \;\leftarrow\; c + (v - c)\, e^{-\Delta t/\tau_r}, \qquad u \;\leftarrow\; u + d, \tag{33}$$

which makes $v$ relax exponentially toward $c$ with rate $1/\tau_r$ and increments $u$ by $d$ to encode refractory depth. In networking terms, (33) models a paced drain after a trigger together with a short-lived slow-down in effective send or admit rate. The pair $(\tau_r, d)$ is set from post-burst relaxation fits on $v$ and the desired "cool-down" depth in $u$.
(Figure 1: (a) nullcline skeleton of the no-input dynamics; (b) shot-noise drive with the reset disabled; (c) the same drive with the continuous pullback enabled.)
Continuous differentiable pullback.
Alternatively, we add to the $v$-dynamics in (9) a smooth term that engages only above threshold:

$$R(v) \;=\; -\,\frac{1}{\tau_r}\; \sigma\!\big(k\,(v - v_{\mathrm{th}})\big)\,(v - c), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}. \tag{34}$$

For $v$ below threshold the multiplier is near $0$ and the pullback is negligible. Once $v$ exceeds the threshold, $\sigma$ rises toward $1$ and an exponential attraction to $c$ turns on as a continuous part of $\dot v$. Training proceeds with a homotopy in $k$, starting smooth and sharpening as optimisation stabilises, which preserves gradient quality while converging to crisp event timing. For reference, the nullclines of the no-input skeleton are

$$u \;=\; \phi(v) + \eta\, v + c_0 - (\mu + \gamma)\, v, \tag{35}$$

$$u \;=\; b\, v. \tag{36}$$
Figure 1 clarifies the role of the smooth pullback in a networking setting. In panel 1(a) the skeleton geometry is benign: the parabolic $v$-nullcline (35) meets the linear $u$-nullcline (36) at a stable point well to the left of the threshold. In panel 1(b), identical shot-noise drive nudges $v$ upward but, with the reset disabled, there is no restoring action triggered at the moment of crossing; the trajectory can linger near threshold, which in practice sustains high send pressure on downstream links. In panel 1(c), the same bursts cause a single threshold crossing followed by an immediate, continuous return toward $c$. The concurrent rise in $u$ encodes a short conservative phase, matching paced drain or token-bucket refill after an alarm. Because (34) is differentiable, these operator-visible effects are trainable: $\tau_r$ is tuned to device drain time or scheduler epoch, $c$ to the desired post-event baseline, and $d$ to the depth of temporary slow-down. The model therefore reproduces both the timing of decisions and the engineered cool-down that follows, without introducing discontinuities that harm gradient-based learning or blur attribution. Fig. 2 illustrates a single NOS unit, highlighting its graph-local inputs, excitability dynamics, recovery variable, stochastic threshold, and differentiable reset pathway.
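Both reset schemes reduce to a few lines inside the integration loop of §3.2. A sketch with hypothetical arguments:

```python
import math

def soft_reset(v, u, v_th, c, d, tau_r, dt):
    """Event-based exponential soft reset (33), applied on crossing."""
    if v >= v_th:
        v = c + (v - c) * math.exp(-dt / tau_r)  # paced drain toward c
        u = u + d                                # refractory depth
    return v, u

def pullback(v, v_th, c, tau_r, k):
    """Continuous pullback (34): sigmoid-gated attraction toward c,
    added to dv/dt; differentiable for any finite sharpness k."""
    gate = 1.0 / (1.0 + math.exp(-k * (v - v_th)))
    return -gate * (v - c) / tau_r
```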
Parameter | Observable(s) | Initialisation rule | Typical default |
---|---|---|---|
$v$ | Queue length or buffer occupancy | Affine map: full buffer $\mapsto v = 1$, empty $\mapsto v = 0$ | — |
$v_{\mathrm{th}}$ | False-alarm budget on residuals | Choose so the false-alarm rate on train meets the budget; set threshold at the matching residual quantile | — |
$\mu$ | Mean service rate | Per-bin rate from departure counts | per bin |
$\gamma$ | Small-signal damping | Fit AR(1) on subthreshold $v$ | per bin |
$\eta$ | Slope of $\dot v$ vs $v$ | Regress $\dot v$ on $v$ on train residuals | — |
$c_0$ | Baseline load | Intercept from the same regression, or set to the mean residual | — |
$\alpha$, $\lambda$ | Nonlinear ramp steepness | Fit $\phi$ to rising edges | — |
$a$ | Post-burst relaxation time | Inverse relaxation time (per bin) from exponential fit | per bin |
$b$ | Recovery sensitivity to $v$ | Tune so $u$ tracks $v$ during decay without overshoot | — |
$u$ decay | Passive decay of $u$ | Small regulariser on drift; set by validation | per bin |
$c$ | Post-event baseline of $v$ | Median $v$ in a quiet window after events | — |
$d$ | Recovery jump on event | Choose to match observed refractory depth | — |
$k$ | Desired sharpness of reset | Large enough to mimic a hard reset without instability | — |
$\tau_r$ | Reset time constant | From post-event drain fit (per bin) | per bin |
$\kappa$ (input gain) | Arrivals mapping | Regress $\dot v$ on smoothed arrivals | — |
$\tau_s$ | Burst smoothing | Low-pass if arrivals are spiky; choose cut-off by validation | ms |
$w_{ij}$ | Link rates/priorities | Proportional to nominal bandwidth or policy weight; normalise | — |
$g$ | Desired coupling index | Set $g\,\rho(W)$ below the margin; pick a stress value for testing | topology-specific |
$d_{ij}$ | Per-link delay | From RTT or profiling; use queue-free component | ms |
Per-bin conversion uses the telemetry bin width in ms. For per-second rates, multiply by the number of bins per second. Defaults are the values used unless stated; sensitivity to these choices is reported in Appendix F.
3.8 Non-dimensionalisation and scaling
Operational traces arrive with different bin widths, device rates, and buffer sizes. To make tuning portable across such heterogeneity, we rescale state and time by fixed references $v_{\mathrm{ref}}$ and $t_{\mathrm{ref}}$, and work with

$$\tilde v = \frac{v}{v_{\mathrm{ref}}}, \qquad \tilde u = \frac{u}{v_{\mathrm{ref}}}, \qquad \tilde t = \frac{t}{t_{\mathrm{ref}}}. \tag{37}$$

Choose $v_{\mathrm{ref}}$ as the full-buffer level used in the queue normalisation (for example, bytes or packets mapped to $v = 1$), and choose $t_{\mathrm{ref}}$ as the dominant local timescale: the scheduler epoch, the service drain time $1/\mu$, or the telemetry bin width when that is the binding constraint. With (37), the left-hand side of (9) satisfies

$$\frac{dv}{dt} \;=\; \frac{v_{\mathrm{ref}}}{t_{\mathrm{ref}}}\, \frac{d\tilde v}{d\tilde t}, \tag{38}$$

and each term on the right-hand side of (9) is re-expressed as a dimensionless group. The bounded excitability transforms as

$$\frac{t_{\mathrm{ref}}}{v_{\mathrm{ref}}}\, \phi(v_{\mathrm{ref}}\, \tilde v) \;=\; \frac{\tilde\alpha\, \tilde v^2}{1 + \tilde\lambda\, \tilde v^2}, \qquad \tilde\alpha = \alpha\, v_{\mathrm{ref}}\, t_{\mathrm{ref}}, \quad \tilde\lambda = \lambda\, v_{\mathrm{ref}}^2. \tag{39}$$

Linear and constant terms scale as

$$\tilde\eta = \eta\, t_{\mathrm{ref}}, \qquad \tilde c_0 = \frac{c_0\, t_{\mathrm{ref}}}{v_{\mathrm{ref}}}, \qquad \widetilde{\mu + \gamma} = (\mu + \gamma)\, t_{\mathrm{ref}}. \tag{40}$$

The recovery dynamics (10) become

$$\frac{d\tilde u}{d\tilde t} \;=\; \tilde a\, \big(\tilde b\, \tilde v - \tilde u\big), \qquad \tilde a = a\, t_{\mathrm{ref}}, \quad \tilde b = b, \tag{41}$$

and the graph-local input (8) and its delayed form (22) scale as

$$\tilde I_i = \frac{I_i\, t_{\mathrm{ref}}}{v_{\mathrm{ref}}}, \qquad \tilde d_{ij} = \frac{d_{ij}}{t_{\mathrm{ref}}}. \tag{42}$$

For the shot-noise drive (28), amplitudes and smoothing follow

$$\tilde A_k = \frac{\kappa_i\, A_k\, t_{\mathrm{ref}}}{v_{\mathrm{ref}}}, \qquad \tilde\tau_s = \frac{\tau_s}{t_{\mathrm{ref}}}. \tag{43}$$

With (37)–(43), raw telemetry from sites that sample at different bin widths is mapped to the same dimensionless dynamics. The same holds for devices with different buffers: picking $v_{\mathrm{ref}}$ from the local full-buffer level preserves the meaning of $\tilde v$. Operators can therefore compare fitted parameters across topologies and hardware: $\tilde\mu$ reflects service relative to the chosen time base; $\tilde a$ reflects recovery rate in scheduler units; $\tilde\alpha$ and $\tilde\lambda$ capture the curvature and knee of the congestion ramp independent of absolute queue size. Delays and gates adopt the same rule: $\tilde d_{ij}$ is RTT-free propagation and processing in units of $t_{\mathrm{ref}}$, while $w_{ij}$ stays a policy or bandwidth proportion that is already dimensionless.
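Applying the groups (39)–(43) when ingesting per-site parameters is mechanical; a sketch in which the dictionary keys are hypothetical:

```python
def to_dimensionless(p, v_ref, t_ref):
    """Map raw per-site parameters to the dimensionless groups (39)-(43)."""
    return {
        "alpha": p["alpha"] * v_ref * t_ref,          # curvature, (39)
        "lam":   p["lam"] * v_ref ** 2,               # knee, (39)
        "eta":   p["eta"] * t_ref,                    # (40)
        "c0":    p["c0"] * t_ref / v_ref,             # (40)
        "mu_gamma": (p["mu"] + p["gamma"]) * t_ref,   # (40)
        "a":     p["a"] * t_ref,                      # (41)
        "b":     p["b"],                              # dimensionless, (41)
        "delay": p["delay"] / t_ref,                  # (42)
        "tau_s": p["tau_s"] / t_ref,                  # (43)
    }
```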
Reading guide and forward links.
The local slope bound in (15) and the service–damping aggregate in (20) feed directly into the small-signal analysis in §5.2 and the network coupling results in §5.3. Let $\rho(W)$ be the spectral radius of the coupling matrix $W$ and $g$ any global gain applied to it. Linearisation of (9)–(10) about an operating point yields a block Jacobian whose dominant network term is proportional to $g\,\rho(W)$. Compare this against the net drain

$$D \;:=\; (\mu + \gamma) + b \;-\; \big(\eta + \phi'(v^*)\big), \tag{44}$$

which is the left side of (20) minus the right side at the operating point. A convenient stability proxy is

$$g\, \rho(W) \;<\; D. \tag{45}$$

Under the rescaling (37), both sides scale by the same factor: $g\,\rho(W)\, t_{\mathrm{ref}}$ and $D\, t_{\mathrm{ref}}$ are dimensionless, so (45) is invariant and can be checked in either set of units. The proxy exposes the same levers used in operations. One can reduce $\rho(W)$ by reweighting or sparsifying couplings, reduce $g$ through gain scheduling, or raise $D$ by increasing service $\mu$, damping $\gamma$, or the recovery coupling $b$ (with the trade-off set by $a$ and the reset depth $d$). It is useful to track the operational margin

$$m \;:=\; D \;-\; g\, \rho(W), \tag{46}$$
which we will use in §5.3 to explain bifurcation onsets, headroom under topology changes, and how weight normalisation can enforce a fixed margin across deployments.
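Given fitted scalars and a coupling matrix, the margin (46) is a one-liner (a dense NumPy eigenvalue call is assumed):

```python
import numpy as np

def stability_margin(W, g, mu, gamma, b, eta, phi_slope):
    """Operational margin (46): m = D - g*rho(W), with D from (44).

    phi_slope : phi'(v*) evaluated at the operating point.
    Positive m means the small-signal proxy (45) holds with slack.
    """
    rho = np.abs(np.linalg.eigvals(W)).max()
    D = (mu + gamma) + b - (eta + phi_slope)
    return D - g * rho
```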
4 Design principles and engineering guidance
The NOS configuration is straightforward: it follows a small set of choices that mirror device behaviour in operational networks. Figure 3 summarises how a NOS unit is used in practice: telemetry pre-processing aligns with the smoothing in (28)–(31); the state evolves by (9)–(10) with resets (33)/(34); events and residuals drive local detection and control; and spikes propagate over $W$ with delays (22) to form neighbours' inputs. This sets the stage for the engineering choices that follow (reset time scale $\tau_r$, the EWMA-like leak, gain scheduling versus spectral control of $\rho(W)$, and threshold calibration).
For resets, one may use either an event–based exponential smoothing at threshold crossings or a continuous sigmoidal pullback that activates only above threshold (cf. §3.7); both are differentiable and avoid algebraic jumps, so gradient flow remains stable and attribution stays clear. In deployment terms, the return–to–baseline time is matched to the hardware drain or scheduler epoch, which means the model cools down on the same clock as the switch or NIC. Subthreshold prefiltering enters as a mild leak that behaves like an EWMA on residual queues: it reveals early warnings while damping tiny oscillations caused by counter jitter, yet preserves micro–bursts that matter for control.
Exogenous burstiness and measurement noise are included explicitly. We use shot–noise drive as in §3.6 and allow slow adaptation of tolerance and recovery so the unit becomes temporarily conservative during sustained stress. Concretely, the threshold may wander within a bounded band and the recovery rate may track recent activity
$$v_{\mathrm{th}}(t) \;=\; \bar v_{\mathrm{th}} + \epsilon(t), \qquad a(t) \;=\; a_0\,\big(1 + \kappa_a\, \hat r(t)\big), \tag{47}$$

where $\epsilon$ is bounded noise that sets the false-alarm budget, $\hat r$ is the short-window spike (arrival) rate, and $\kappa_a$ sets the adaptation strength. These adaptations match operator practice such as temporary pacing, token-bucket refill, or ECN-driven headroom.
Training follows standard practice while keeping the networking semantics intact. We employ surrogate gradients at the threshold with a fast-sigmoid derivative

$$\frac{\partial \hat S}{\partial v} \;=\; \frac{1}{\big(1 + k\,|v - v_{\mathrm{th}}|\big)^2}, \tag{48}$$

which keeps gradients finite and centred. To approach crisp events without destabilising optimisation, we use a homotopy on the pullback sharpness that starts smooth and tightens once optimisation stabilises,

$$k_e \;=\; k_{\min}\,\Big(\frac{k_{\max}}{k_{\min}}\Big)^{e/E}, \qquad e = 0, \dots, E, \tag{49}$$

which preserves gradient quality early and matches event timing late. We also clip gradients by global norm while $k$ grows. An adaptive optimiser such as Adam is suitable; the learning rate is reduced as $k$ increases to maintain stable steps near the threshold. Truncated BPTT should cover the dominant recovery timescale so that $u$'s memory is learned rather than aliased,

$$T_{\mathrm{BPTT}} \;\gtrsim\; \frac{1}{a}, \tag{50}$$

which aligns with the cool-down used by paced drain or token-bucket refill in devices. After supervised pre-training, slow local updates can be enabled to track diurnal load or policy shifts.
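A PyTorch-style sketch of the surrogate spike (48) and the sharpness homotopy (49); the class and the geometric schedule are illustrative assumptions, not the exact training harness used here:

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike with the fast-sigmoid surrogate gradient (48)."""
    @staticmethod
    def forward(ctx, x, k):
        ctx.save_for_backward(x)
        ctx.k = k
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + ctx.k * x.abs()) ** 2   # (48)
        return grad_out * surrogate, None

def sharpness_schedule(epoch, total, k_min=2.0, k_max=50.0):
    """Homotopy (49): geometric ramp of sharpness over training epochs."""
    return k_min * (k_max / k_min) ** (epoch / max(total - 1, 1))

# Usage inside the loop: spike = FastSigmoidSpike.apply(v - v_th, k)
```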
The network view stays explicit. Delays and gates ensure that path timing and residual capacity modulate influence at the right hop and time. The small-signal proxy in (45) makes the stability levers visible to operations: one can lower $\rho(W)$ by reweighting or sparsifying couplings, reduce the global gain $g$ by scheduling, or increase the net drain $D$ by raising service $\mu$, damping $\gamma$, or recovery coupling $b$ (with $a$ and $d$ setting trade-offs). These are the same knobs used in practice to keep queues bounded and alerts meaningful, and they connect directly to the analysis that follows.
4.1 Neuromorphic deployment
NOS maps cleanly to event-driven hardware. Inbound packets become address–event representation spikes; the graph term is realised by on-chip routers that natively implement sparse fan-out with per-link delay lines. The state is stored in fixed point, and the soft resets in (33)–(34) are implemented as leaky updates or gated pullbacks without algebraic jumps. This preserves gradient surrogates for training and keeps timing aligned with scheduler epochs and drain times. For networking workloads, the benefits are direct: energy per decision scales with the spike rate rather than the link count; delays become programmable pipeline stages; and per-link gates mirror queue-aware policing or ECN. Practical quantisation rules, fabric-rate constraints, and an operational margin check under finite precision are detailed in Appendix B.
5 Equilibrium and local stability analysis
We analyse subthreshold operating points of a NOS node and then lift the analysis to the coupled network, treating resets as short transients that do not change the slow geometry. At equilibrium the node holds a steady queue level set by arrivals, service, and graph-local coupling. We first derive the equilibrium equations and conditions for existence and uniqueness. We then linearise to obtain the Jacobian and local stability criteria. Next we introduce network coupling, form the block Jacobian, and relate stability to the graph spectrum, which yields an explicit stability threshold and an operational margin. We classify the ensuing bifurcations and read them in operational terms (onset of ringing, loss of headroom, topology-driven tipping), and we quantify noise sensitivity under stochastic drive to explain false alarms and recovery delays. Finally, we compare the subthreshold steady state with the queueing baseline to anchor NOS against classical service-arrival models.
5.1 Equilibrium equations
For node $i$, let $(v_i^*, u_i^*)$ denote a subthreshold equilibrium ($v_i^* < v_{\mathrm{th}}$). Equation (10) implies a linear relation between recovery and queue level:

$$u_i^* \;=\; b\, v_i^*. \tag{51}$$

Substituting (51) into (9) gives

$$\phi(v_i^*) + \eta\, v_i^* + c_0 - b\, v_i^* - (\mu + \gamma)\, v_i^* + \bar I_i \;=\; 0, \tag{52}$$

where $\bar I_i$ denotes the long-run mean of the graph-local drive at node $i$ (including delayed contributions). Eliminating $u_i^*$ yields a scalar fixed-point equation for $v_i^*$:

$$F_i(v) \;:=\; \phi(v) \;-\; \big((\mu + \gamma) + b - \eta\big)\, v \;+\; c_0 + \bar I_i \;=\; 0. \tag{53}$$

Equation (53) balances three effects: a convex rise with load through $\phi$, linear drain through the net service and damping, and the constant baseline from $c_0 + \bar I_i$. The mean input $\bar I_i$ aggregates neighbour activity over the network with delays; under stationary traffic it is well defined because the delayed sum in (22) averages to a constant.
Quadratic small-signal approximation.
For analytic visibility, replace $\phi(v) \approx \alpha v^2$ in a small-signal regime and define

$$A \;:=\; (\mu + \gamma) + b - \eta, \qquad B_i \;:=\; c_0 + \bar I_i. \tag{54}$$

Then (53) reduces to

$$\alpha\, v^2 \;-\; A\, v \;+\; B_i \;=\; 0, \tag{55}$$

with discriminant

$$\Delta \;=\; A^2 \;-\; 4\,\alpha\, B_i. \tag{56}$$

If $\Delta \ge 0$ and the admissible root lies in $[0, 1]$, it approximates $v_i^*$ and clarifies how the operating point moves with load. In networks, service and damping must outpace the steepest small-signal gain of $\phi$; the corollary in §3.4 provided a direct algebraic check. In the quadratic approximation, the sign of $\Delta$ in (56) separates three regimes: $\Delta < 0$ (no real root, parameter mismatch), $\Delta = 0$ (tangent balance at a single $v^*$), and $\Delta > 0$ (two mathematical roots, of which the physically admissible one lies in the queue range and satisfies the stability test below).
Existence and uniqueness on $[0, 1]$.
The monotonicity criterion introduced earlier applies with $A$ as in (54): a sufficiently positive net slope over the admissible interval (for example $A > \sup_{v \in [0,1]} \phi'(v)$) ensures a unique solution. For the saturating choice

$$\phi(v) \;=\; \frac{\alpha\, v^2}{1 + \lambda\, v^2}, \tag{57}$$

the slope

$$\phi'(v) \;=\; \frac{2\,\alpha\, v}{(1 + \lambda\, v^2)^2} \tag{58}$$

is nonnegative and bounded on $[0, \infty)$, with $\sup_{v \ge 0} \phi'(v) = 3\sqrt{3}\,\alpha/(8\sqrt{\lambda})$. If

$$A \;>\; \sup_{v \in [0, 1]} \phi'(v), \tag{59}$$

then $G(v) = A v - \phi(v) - B_i$ is strictly increasing on $[0, 1]$ and has at most one root there. If in addition $G(0) < 0 < G(1)$, a unique $v_i^*$ exists. Condition (59) is met whenever

$$A \;\ge\; \frac{3\sqrt{3}\,\alpha}{8\sqrt{\lambda}}, \tag{60}$$
which is the small-signal service-dominance requirement: linear damping must exceed the maximal small-signal excitability slope.
Networking interpretation.
Equation (53) balances offered load against service and damping. The term $\bar I_i$ aggregates graph-local arrivals and policy weights; $A$ collects all linear drains. The sensitivity to slowly varying load follows by implicit differentiation:

$$\frac{\partial v_i^*}{\partial \bar I_i} \;=\; \frac{1}{A - \phi'(v_i^*)}. \tag{61}$$

Hence the DC gain is stable and interpretable. It shrinks as saturation increases ($\phi'$ falls) or as service and damping increase ($A$ grows), which matches the operational goal of keeping the queue operating point insensitive to modest load drift.
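In the quadratic regime, the admissible root of (55) and the DC gain (61) have closed forms; a sketch:

```python
import math

def quadratic_equilibrium(A, B, alpha):
    """Stable root of (55) and DC gain (61) in the quadratic regime.

    Returns (v_star, dc_gain), or None when the discriminant (56) is
    negative and no steady operating point exists.
    """
    disc = A * A - 4.0 * alpha * B
    if disc < 0:
        return None
    v_star = (A - math.sqrt(disc)) / (2.0 * alpha)  # admissible branch
    dc_gain = 1.0 / (A - 2.0 * alpha * v_star)      # (61); equals 1/sqrt(disc)
    return v_star, dc_gain
```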
5.2 Jacobian and linear stability
Linearising NOS node (9)–(10) at a subthreshold equilibrium $(v_i^*, u_i^*)$ gives the single-node Jacobian

$$J_i \;=\; \begin{pmatrix} \phi'(v_i^*) + \eta - (\mu + \gamma) & -1 \\ a\, b & -a \end{pmatrix}, \tag{62}$$

with trace and determinant

$$\operatorname{tr} J_i \;=\; \phi'(v_i^*) + \eta - (\mu + \gamma) - a, \tag{63}$$

$$\det J_i \;=\; a\,\big[(\mu + \gamma) + b - \eta - \phi'(v_i^*)\big]. \tag{64}$$

The Routh–Hurwitz conditions for a two-dimensional system yield local asymptotic stability if and only if

$$\operatorname{tr} J_i \;<\; 0 \qquad \text{and} \qquad \det J_i \;>\; 0. \tag{65}$$
Networking reading.
The trace condition (65) asks that net drain exceeds the small-signal growth from $\phi$. The determinant condition requires that the product of recovery gain and time-scale ($a b$) dominates the same growth term. Together they state that service plus damping and the recovery loop must remove perturbations faster than excitability amplifies them. When either inequality tightens, small oscillations and slow return-to-baseline emerge, which operators observe as ringing in queue telemetry.
Specialisations.
Quadratic small-signal. For small $v^*$, use $\phi'(v^*) \approx 2\alpha v^*$ and test (65) with that slope.
Bounded excitability. For $\phi(v) = \alpha v^2/(1 + \lambda v^2)$,

$$\phi'(v^*) \;=\; \frac{2\,\alpha\, v^*}{(1 + \lambda\, v^{*2})^2}, \tag{66}$$

$$\operatorname{tr} J \;=\; \phi'(v^*) + \eta - (\mu + \gamma) - a, \tag{67}$$

$$\det J \;=\; a\,\big[(\mu + \gamma) + b - \eta - \phi'(v^*)\big]. \tag{68}$$

As $\lambda$ grows, the effective slope (66) decreases, enlarging the stability region defined by (65).
Lemma 2 (Monotone stabilisation by saturation).
Let $\phi(v) = \alpha v^2/(1 + \lambda v^2)$ with $\alpha > 0$, $\lambda > 0$, and let $v^*$ be a subthreshold equilibrium. Then

$$\frac{\partial \phi'(v)}{\partial \lambda} \;=\; -\,\frac{4\,\alpha\, v^3}{(1 + \lambda\, v^2)^3} \;\le\; 0, \tag{69}$$

with equality only at $v = 0$. Consequently,

$$\frac{\partial \operatorname{tr} J}{\partial \lambda} \;\le\; 0, \qquad \frac{\partial \det J}{\partial \lambda} \;\ge\; 0, \tag{70}$$

so increasing $\lambda$ strictly decreases the trace and strictly increases the determinant for $v^* > 0$, enlarging the region $\{\operatorname{tr} J < 0,\ \det J > 0\}$.
Proof sketch. Differentiating (58) with respect to $\lambda$ gives (69). Substituting into (63)–(64),

$$\frac{\partial \operatorname{tr} J}{\partial \lambda} \;=\; \frac{\partial \phi'(v^*)}{\partial \lambda} \;\le\; 0, \qquad \frac{\partial \det J}{\partial \lambda} \;=\; -\,a\, \frac{\partial \phi'(v^*)}{\partial \lambda} \;\ge\; 0, \tag{71}$$

which proves (70).
Corollary 2.1 (Network threshold increases with saturation).
Under homogeneous coupling with Perron eigenvalue $\rho(W)$, define

$$g_c(\lambda) \;:=\; \frac{(\mu + \gamma) + b - \eta - \phi'(v^*; \lambda)}{\rho(W)}. \tag{72}$$

Since $\partial \phi'/\partial \lambda < 0$ for $v^* > 0$, one has $\partial g_c/\partial \lambda > 0$, and therefore the critical coupling grows with $\lambda$ on the active branch of $v^*$. Increasing $\lambda$ raises the coupling needed to destabilise the network, which matches the bounded-excitability design goal.
In networking terms, making the excitability saturate earlier reduces small-signal gain and raises damping, which protects the node against oscillations when operated near capacity.
Operational guidance.
When tuning for a given site, estimate $v^*$ from the modal subthreshold queue level and compute the small-signal slope $\phi'(v^*)$ using the fitted $(\alpha, \lambda)$. If (65) is tight, increasing $\lambda$ (earlier saturation) or increasing $\mu$ and $\gamma$ adds headroom. If tightening occurs only in $\det J$, adjust the recovery loop to increase $b$ or the rate $a$; both choices shorten the memory of recent stress and prevent ringing after bursts. Decay times and DC sensitivity that guide window length and reset tuning are summarised in Appendix C, Eqs. (98) and (99).
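The check (65), with the bounded-excitability slope (66)–(68), is directly computable from fitted parameters:

```python
def local_stability(v_star, alpha, lam, eta, mu, gamma, a, b):
    """Routh-Hurwitz test (65) using the Jacobian entries (62)-(64)."""
    slope = 2.0 * alpha * v_star / (1.0 + lam * v_star**2) ** 2  # (66)
    tr = slope + eta - (mu + gamma) - a                          # (63)
    det = a * ((mu + gamma) + b - eta - slope)                   # (64)
    return (tr < 0.0) and (det > 0.0), tr, det
```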
5.3 Network coupling and global stability
In isolation a NOS unit settles to a load–service balance set by its local parameters. In a network the steady input is shaped by neighbours and policy weights, so existence and robustness of equilibria depend on both the operating point and the coupling matrix. Let $W$ be the graph coupling and let $\bar s_j$ denote the effective steady presynaptic drive at node $j$ (after any per-link delays and gates). The steady input at node $i$ is

$$\bar I_i \;=\; g \sum_j W_{ij}\, \bar s_j. \tag{73}$$

With $\bar s_j \le s_{\max}$, $\|\bar I\|_\infty \le g\, \|W\|_\infty\, s_{\max}$, and the spectral radius obeys $\rho(W) \le \|W\|_\infty$, so $g\, \|W\|_\infty\, s_{\max}$ gives a conservative topology-aware cap on steady drive. From the scalar equilibrium reduction (cf. (51)–(56)), define

$$A \;:=\; (\mu + \gamma) + b - \eta, \qquad B_i \;:=\; c_0 + \bar I_i, \tag{74}$$

and write the per-node equilibrium condition as

$$\phi(v_i^*) \;-\; A\, v_i^* \;+\; B_i \;=\; 0. \tag{75}$$
Existence bounds.
Two complementary sufficient conditions are useful in operations.
Saturated cap (conservative). Since $\phi(v) \le \alpha/\lambda$ for all $v \ge 0$, an equilibrium in $[0, 1]$ exists whenever

$$A \;\ge\; \frac{\alpha}{\lambda} \;+\; B_i. \tag{76}$$

This bound uses only the saturation ceiling and is independent of the local operating point.

Quadratic small-signal. Approximating $\phi(v) \approx \alpha v^2$ near light load gives

$$v_i^* \;\approx\; \frac{A - \sqrt{A^2 - 4\,\alpha\, B_i}}{2\,\alpha}, \tag{77}$$

so an equilibrium exists if

$$A^2 \;\ge\; 4\,\alpha\, B_i. \tag{78}$$

We refer to (78) as the quadratic small-signal bound. Combining (78) with the norm bound in (73) yields

$$g\, \|W\|_\infty\, s_{\max} \;\le\; \frac{A^2}{4\,\alpha} \;-\; c_0, \tag{79}$$

with a spectral alternative obtained by replacing $\|W\|_\infty$ with $\rho(W)$. Inequality (79) makes the graph explicit: heavier row sums (dense fan-in or large policy weights) shrink headroom for the same steady drive.
Operational margin.
A convenient scalar indicator is

$$M \;:=\; \frac{A^2}{4\,\alpha} \;-\; c_0 \;-\; g\, \|W\|_\infty\, s_{\max}, \tag{80}$$

which is positive when the quadratic small-signal existence test passes with slack. See Appendix G for the conservative one-dimensional sweeps of the operational margin versus load, a worked numerical example, and the linear dependence on load illustrated in Fig. 15. In practice, keeping $M > 0$ under nominal load provides headroom for burstiness and guides weight normalisation: scale $W$ so that the largest row sum keeps $g\, \|W\|_\infty\, s_{\max}$ below the right-hand side of (79). When delays are present, (79) remains a conservative existence test; sharper delay-aware guarantees require the block Jacobian in the next subsection.
(Figure 4: operational margin $M$ over the damping–load plane; the red contour marks $M = 0$.)
Operational margin under load and damping.
Figure 4 plots $M$ as a function of the subthreshold damping $\gamma$ and the maximum steady input $\bar I_{\max}$. The red contour separates admissible operating points from those that violate the quadratic small-signal bound. From a networking viewpoint, $\gamma$ measures the strength of subthreshold queue relaxation, while $\bar I_{\max}$ is the heaviest sustained offered load after graph coupling. Two effects are visible: (i) the margin decreases monotonically with $\bar I_{\max}$, so higher offered load erodes headroom; (ii) larger $\gamma$ shifts the boundary outward, permitting higher sustained load before crossing $M = 0$. Stronger damping therefore improves resilience but does not remove the inherent load–margin trade-off.
Networking view. $\|W\|_\infty$ aggregates worst-case inbound influence at a node, so it reflects heavy in-degree or imbalanced policies; $\rho(W)$ captures coherent multi-hop reinforcement. Both provide simple levers: reweight heavy rows, sparsify fan-in, or gate selected edges until $M$ is positive with a chosen margin.
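The margin (80) follows from the row sums of $W$ and the fitted scalars; a sketch in the same notation:

```python
import numpy as np

def existence_margin(W, g, s_max, mu, gamma, b, eta, alpha, c0):
    """Existence margin (80) via the quadratic bound (78)-(79)."""
    A = (mu + gamma) + b - eta                # net linear drain, (74)
    row_cap = np.abs(W).sum(axis=1).max()     # ||W||_inf
    return A * A / (4.0 * alpha) - c0 - g * row_cap * s_max
```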
5.4 Block Jacobian, network spectrum, and stability threshold
We assess global small-signal stability by linearising the coupled NOS network about a subthreshold equilibrium $(v^*, u^*)$. Let $\delta v$ and $\delta u$ be perturbations of state, and let small variations of presynaptic drive satisfy $\delta I = S\, \delta v$, where $S$ is the input–output sensitivity induced by topology (for homogeneous scaling, $S = g\, W$). Linearisation of (9)–(10) then yields the block Jacobian

$$J \;=\; \begin{pmatrix} \operatorname{diag}\!\big(\phi'(v_i^*) + \eta - (\mu + \gamma)\big) + g\, W & -I_N \\ a\, b\, I_N & -a\, I_N \end{pmatrix}. \tag{81}$$
Instability occurs when any eigenvalue of crosses into the open right half-plane.
Perron-mode reduction under homogeneous coupling.
For homogeneous coupling, $S = g\, W$; let $\rho(W)$ be the spectral radius with Perron eigenpair $(\rho(W), \mathbf{p})$. Projecting (81) onto $\mathbf{p}$ yields an effective Jacobian

$$J_{\mathrm{eff}} \;=\; \begin{pmatrix} \phi'(\bar v) + \eta - (\mu + \gamma) + g\, \rho(W) & -1 \\ a\, b & -a \end{pmatrix}. \tag{82}$$

In the Perron-mode reduction we define $\phi'(\bar v)$ as the Perron-weighted average of the node slopes $\phi'(v_i^*)$, which is exact for heterogeneous equilibria. Throughout the subsequent analysis we write $\phi'(\bar v)$ for homogeneous or mildly heterogeneous networks whenever the slopes are nearly constant across nodes (or the Perron vector is sufficiently delocalised). All threshold formulas (e.g. (83) and (85)) remain valid with either form by substituting the appropriate slope for the deployment at hand. The Routh–Hurwitz conditions for $J_{\mathrm{eff}}$ give the critical scalar coupling

$$g_c \;=\; \frac{(\mu + \gamma) + b - \eta - \phi'(\bar v)}{\rho(W)}. \tag{83}$$

Using $D$ (cf. (44)), the threshold condition can be written as

$$g\, \rho(W) \;<\; D, \tag{84}$$

which matches the stability proxy introduced in the scaling discussion.
which matches the stability proxy introduced in the scaling discussion. While the Perron-mode reduction yields tight thresholds in practice, conservative certificates can be obtained via Gershgorin discs and norm bounds; see Appendix H for the resulting inequalities and their comparison to measured in Table 8.
(Figure 5: leading eigenvalue of the block Jacobian versus coupling $g$, and measured thresholds $g_c\,\rho(W)$ collapsing onto $D$ across topologies.)
Networking interpretation and control.
Figure 5 shows that the first loss of stability is governed by the Perron mode of $W$. As coupling increases, the leading eigenvalue crosses the imaginary axis at $g = g_c$, marking the onset of coherent queue growth along the dominant influence path. The collapse of $g_c\, \rho(W)$ onto $D$ confirms the separation of concerns in (83): $\rho(W)$ captures how strongly topology reinforces load; $D$ depends only on node physics through $\phi'(\bar v)$. Operationally, the levers are direct. Reduce $\rho(W)$ by reweighting heavy fan-in or pruning selected edges. Reduce $g$ through gain scheduling on stressed regions or policy throttles that weaken sensitivity. Raise the drain margin $D$ by increasing service $\mu$, adding subthreshold damping $\gamma$, or strengthening the recovery loop via $b$ (with $a$ setting the trade). Because $\phi'(\bar v)$ decreases as saturation strengthens (larger $\lambda$), bounded excitability expands the stable range in (84).
Finite-size and heterogeneity effects.
The rescaled thresholds are not perfectly flat. They sit slightly above $D$ across topologies, with larger deviations in smaller or more heterogeneous graphs. Two mechanisms explain this percent-level bias. First, nearby non-Perron modes and row-sum variability shift the leading root when the Perron vector localises on hubs, a common feature of heavy-tailed topologies. Second, discrete sweeps of $g$ and mild non-normal amplification nudge the numerical crossing. These are finite-size corrections, not a change of scaling, and they diminish with larger or more homogeneous weights. For conservative guarantees during early deployment, replace $\rho(W)$ by norm bounds or apply Gershgorin discs to (81) to obtain stricter, topology-aware thresholds. For deployments requiring analytic guarantees, Appendix H derives Gershgorin and norm-based bounds and reports their conservatism relative to measured thresholds (Table 8).
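The prediction (83) can be validated against a direct eigenvalue sweep of the block Jacobian (81). The sketch below assumes homogeneous node parameters and a dense $W$; the grid and defaults are arbitrary. With $a \ge b$ the determinant condition binds and the measured crossing should match $D/\rho(W)$:

```python
import numpy as np

def critical_coupling(W, v_bar, alpha, lam, eta, mu, gamma, a, b):
    """Compare the Perron-mode threshold (83) against a numerical sweep
    of the block Jacobian (81). Returns (predicted, measured)."""
    n = W.shape[0]
    slope = 2.0 * alpha * v_bar / (1.0 + lam * v_bar**2) ** 2
    rho = np.abs(np.linalg.eigvals(W)).max()
    D = (mu + gamma) + b - eta - slope            # net drain (44)
    g_pred = D / rho                              # (83)

    def unstable(g):
        Av = np.diag(np.full(n, slope + eta - (mu + gamma))) + g * W
        J = np.block([[Av, -np.eye(n)],
                      [a * b * np.eye(n), -a * np.eye(n)]])
        return np.linalg.eigvals(J).real.max() > 0.0

    gs = np.linspace(0.0, 2.0 * g_pred, 400)
    g_meas = next((g for g in gs if unstable(g)), np.nan)
    return g_pred, g_meas
```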
5.5 Bifurcation classification, operational reading, and finite-size effects
Loss of stability as the mean input $\bar I$ increases (or as effective coupling grows) is produced by two local mechanisms. They can be diagnosed either from the scalar equilibrium map (fold tests) or from the Jacobian at the operating point (Routh–Hurwitz tests).

1. Saddle–node (SN). The stable and unstable equilibria coalesce and disappear when the discriminant in (56) satisfies $\Delta = 0$. Beyond this point there is no steady operating level: $v$ drifts upward until resets and leakage dominate. In network terms, offered load or coherent reinforcement pushes a node beyond its admissible queue set; the model predicts sustained congestion with intermittent resets rather than recovery to a fixed level.

2. Hopf. The Jacobian trace crosses zero while the determinant remains positive ($\operatorname{tr} J = 0$, $\det J > 0$), creating a small oscillatory mode. In network terms, queues enter a self-excited cycle whose phase can entrain neighbors through $W$, producing rolling congestion waves that propagate along high-gain paths.
Parameter trends and admissibility.
Continuation of equilibria versus $\bar I$ reproduces queueing intuition and clarifies which onset is available. Increasing the service $\mu$ or the subthreshold damping $\gamma$ shifts both SN and Hopf onsets to larger $\bar I$ (more headroom); increasing excitability $\alpha$ shifts them to smaller $\bar I$ (less headroom). The recovery coupling $b$ controls whether a Hopf branch is admissible: the linear feedback $b$ must exceed the decay $a$ for an oscillatory onset to exist; otherwise only an SN occurs. Figure 6 shows the onset loci under sweeps of $\lambda$, $\alpha$, and $b$; marker coordinates are listed in Table 4. Near-coincident markers correspond to the codimension-two neighborhood where the SN and Hopf conditions meet, where a small change in recovery or damping switches the dominant failure mode from smooth tipping (SN) to oscillatory bursts (Hopf).
Empirical collapse. Across graphs, the measured threshold obeys

$$g_c\, \rho(W) \;\simeq\; D, \tag{85}$$

with small upward deviations when the Perron vector localises or row-sums vary; cf. Fig. 5.
Networking interpretation.
The two onsets have distinct operational signatures. An SN route produces a clean loss of a steady queue level and sustained saturation. Telemetry shows a monotone climb in $v$ with resets, increased delay variance, and persistent ECN/marking without a clear period. A Hopf route produces narrowband oscillations: queues and drop/mark rates cycle with a characteristic period at onset, and neighboring nodes oscillate in phase along the Perron mode of $W$. These patterns matter for mitigation. To extend stable throughput, raise $\mu$ or $\gamma$, or reduce $\phi'$ through stronger saturation (larger $\lambda$). To avoid self-sustained burstiness, keep $a$ comfortably larger than $b$ by shortening recovery time (larger $a$), reducing recovery sensitivity ($b$) when needed, or adding passive decay. Each lever maps to standard controls: scheduler and line-rate settings ($\mu$), AQM/leak policies ($\gamma$), congestion ramp shaping ($\alpha$, $\lambda$), and backoff or pacing aggressiveness ($a$, $b$).
Finite-size effects and coherence.
In finite graphs the transition is slightly rounded but sharpens with size. The measured coupling threshold aligns with the Perron-mode prediction (cf. (83)), with small upward deviations when the Perron vector localises on hubs or when secondary modes lie close in the spectrum. These are percent-level corrections that diminish in larger or more homogeneous topologies. Practically, one can monitor the rescaled margin $m = D - g\,\rho(W)$: values near zero predict imminent entrainment of queues and the appearance of coherent oscillations along the dominant influence path. Appendix I quantifies network coherence via an order parameter, confirming that the onset clusters near $g\,\rho(W) = D$ and sharpens with network size (Fig. 16, Table 9).
Corollary 2.2 (Spectral collapse of the coupling threshold). Under the Perron-mode reduction of §5.4, the critical coupling satisfies $g_c\, \rho(W) = D$, so thresholds measured across topologies collapse onto a single constant once rescaled by the Perron eigenvalue; cf. (85) and Fig. 5.

(Figure 6: SN and Hopf onset loci under sweeps of $\lambda$, $\alpha$, and $b$; marker coordinates are listed in Table 4.)
Panel | $\lambda$ | $\alpha$ | $b$ | $\bar I$ at SN onset | $v$ at SN | $\bar I$ at Hopf onset | $v$ at Hopf |
---|---|---|---|---|---|---|---|
$\lambda$ sweep | 0.0 | 1.0 | 2.0 | 1.198 | 1.095 | 1.111 | 0.800 |
$\lambda$ sweep | 0.3 | 1.0 | 2.0 | 1.549 | 1.245 | 1.462 | 0.950 |
$\lambda$ sweep | 0.6 | 1.0 | 2.0 | 1.945 | 1.395 | 1.858 | 1.100 |
$\alpha$ sweep | 0.3 | 0.6 | 2.0 | 2.582 | 2.074 | 2.437 | 1.583 |
$\alpha$ sweep | 0.3 | 1.0 | 2.0 | 1.549 | 1.245 | 1.462 | 0.950 |
$\alpha$ sweep | 0.3 | 1.4 | 2.0 | 1.106 | 0.889 | 1.044 | 0.679 |
$b$ sweep | 0.3 | 1.0 | 0.6 | 0.404 | 0.636 | — | — |
$b$ sweep | 0.3 | 1.0 | 1.0 | 0.656 | 0.810 | — | — |
$b$ sweep | 0.3 | 1.0 | 1.6 | 1.146 | 1.071 | 1.132 | 0.950 |
5.6 Noise sensitivity under stochastic drive
We model arrivals as shot noise with exponential smoothing. For node $i$,
$\eta_i(t) \;=\; \sum_{k} A_{ik}\, e^{-(t - t_{ik})/\tau_s}\,\Theta(t - t_{ik})$,  (86)
where $\nu_i$ is the event rate of the Poisson times $t_{ik}$, the $A_{ik}$ are i.i.d. amplitudes, $\tau_s$ is a smoothing time, and $\Theta$ is the Heaviside step. The process is stationary with mean, variance, and autocovariance
$\mathbb{E}[\eta_i] = \nu_i\,\mathbb{E}[A]\,\tau_s, \qquad \operatorname{Var}[\eta_i] = \tfrac{1}{2}\,\nu_i\,\mathbb{E}[A^2]\,\tau_s, \qquad C_\eta(\tau) = \operatorname{Var}[\eta_i]\, e^{-|\tau|/\tau_s}$.  (87)
Its power spectrum is Lorentzian,
$S_\eta(\omega) \;=\; \dfrac{2\,\operatorname{Var}[\eta_i]\,\tau_s}{1 + \omega^2 \tau_s^2}$.  (88)
In networking terms, $(\nu_i, \mathbb{E}[A], \tau_s)$ are read from the same counters and windows used in telemetry: $\nu_i$ is the burst-start rate, $\mathbb{E}[A]$ is the mean per-burst mass after smoothing, and $\tau_s$ matches the prefilter that removes counter jitter while preserving micro-bursts.
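The following sketch simulates one such smoothed shot-noise trace and checks it against (87); the event rate, smoothing time, and Exp(1) amplitudes are illustrative choices, and the discrete update carries a small O(dt/tau_s) bias relative to the continuous kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, tau_s, T, dt = 50.0, 0.02, 200.0, 1e-3  # event rate (Hz), smoothing (s), horizon (s), step (s)

n = int(T / dt)
eta = np.zeros(n)
state = 0.0
for t in range(n):
    k = rng.poisson(nu * dt)                         # burst starts in this bin
    if k:
        state += rng.exponential(1.0, size=k).sum()  # i.i.d. Exp(1) amplitudes
    state *= np.exp(-dt / tau_s)                     # exponential smoothing kernel
    eta[t] = state

# Campbell's formulas (87): mean = nu*E[A]*tau_s, var = nu*E[A^2]*tau_s/2
print("mean", eta.mean(), "vs", nu * 1.0 * tau_s)        # E[A] = 1 for Exp(1)
print("var ", eta.var(), "vs", 0.5 * nu * 2.0 * tau_s)   # E[A^2] = 2 for Exp(1)
```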
Small-signal sensitivity.
Linearising NOS at a subthreshold equilibrium $(v^\ast, u^\ast)$ gives, with $s \equiv f'(v^\ast) + \beta - \lambda - \chi$,
$\delta\dot v = s\,\delta v - \delta u + \delta\eta, \qquad \delta\dot u = ab\,\delta v - (a+\mu)\,\delta u$.  (89)
The DC gain and the variance of $\delta v$ under (88) are
$H(0) = \dfrac{a+\mu}{ab - s\,(a+\mu)}, \qquad \operatorname{Var}[\delta v] = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty} |H(\omega)|^2\, S_\eta(\omega)\,\mathrm{d}\omega$.  (90)
As the node approaches its local margin ($\det J \to 0$) or the network approaches the Perron-mode threshold ($g \to g_c$), $H(0)$ increases and the integral in (90) grows, so the same shot-noise trace produces larger queue excursions and more threshold crossings.
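A minimal numerical check of (89)–(90): the script below evaluates the Lorentzian-weighted variance integral at an illustrative operating point (the values of s, a, b, and mu are assumptions, not calibrated parameters).

```python
import numpy as np

s, a, b, mu = -0.8, 0.1, 0.5, 0.05             # illustrative operating point
J = np.array([[s, -1.0], [a * b, -(a + mu)]])  # subthreshold Jacobian from (89)

var_eta, tau_s = 1.0, 0.02
omega = np.linspace(-2000.0, 2000.0, 40001)
d_omega = omega[1] - omega[0]
S = 2.0 * var_eta * tau_s / (1.0 + (omega * tau_s) ** 2)  # Lorentzian (88)

e_v = np.array([1.0, 0.0])                     # noise enters and is read on v
H = np.array([np.linalg.solve(1j * w * np.eye(2) - J, e_v)[0] for w in omega])

var_v = (np.abs(H) ** 2 * S).sum() * d_omega / (2.0 * np.pi)
print("H(0) =", (a + mu) / (a * b - s * (a + mu)), " Var[dv] =", var_v)
```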
Firing statistics and cascades.
We summarise sensitivity by the mean firing rate $r$, the inter-spike-interval coefficient of variation (CV), and avalanche sizes (contiguous above-threshold activity aggregated over neighbours). With parameters fixed and only the noise amplitude varied, three robust effects appear, consistent with (89)–(90) and bounded excitability: (i) $r$ rises with amplitude in all regimes, with a steeper slope near the coupling threshold because small increments in drive map to larger excursions in $\delta v$; (ii) CV increases near threshold, reflecting irregular, burst-dominated spike trains as variance grows; (iii) the tail of the avalanche-size distribution becomes heavier with amplitude, indicating extended congestion cascades when fluctuations push multiple nodes above threshold before recovery completes.
Networking interpretation.
Higher $\nu_i$ or larger amplitudes mean burstier ingress; increasing $\tau_s$ models longer burst coherence. The levers that reduce noise sensitivity are the same that enlarge deterministic headroom: higher service and subthreshold damping ($\lambda$, $\chi$), faster recovery ($a$), and earlier saturation (which lowers $f'(v^\ast)$ and hence $H(0)$). Each reduces $H(0)$ and the variance integral in (90). With per-link gates, attenuating noisy edges reduces the effective drive entering the coupled system and moves the network away from the Perron threshold.
Empirical summary (Fig. 7).
The rate curves show $r$ versus amplitude for subcritical and near-threshold regimes; the steeper slope near threshold matches the growth of $H(0)$. The CV curves increase with amplitude and peak near threshold, consistent with (90). The avalanche distributions broaden as amplitude increases; heavier tails near threshold reflect larger correlated excursions. These trends are stable across the tested event rates and the reported amplitude range, and they align with the linear response in (89). Further quantitative tail fits are provided in Appendix E.
5.7 Queueing baseline comparison
Experimental setup.
We compare NOS against canonical queueing baselines in two modes. (i) Open–loop: we drive M/M/1 (analytic mean $\rho/(1-\rho)$, $\rho=\lambda/\mu$), simulated M/M/1/$K$, and a single NOS unit with the same exogenous arrivals and report observables in the same unit (packets). Light–load calibration aligns means: we choose an input gain and output scale so that the NOS mean at small $\rho$ matches the M/M/1 mean $\rho/(1-\rho)$. Tail behaviour is then tested under a common MMPP burst sequence (same ON/OFF epochs), which is the operational regime of interest. (ii) Closed–loop: we attach three controllers to the same offered–load trace and compare the resulting queue trajectories and marking signals $p(t)$: A) NOS–based (marking derived from NOS subthreshold state with soft reset), B) queue controller (marking from the raw queue), and C) LP–queue controller (marking from a lightly low–pass filtered queue). All three use the same logistic nonlinearity for $p$, so differences are due to the state fed to the nonlinearity.
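A sketch of this light-load calibration, assuming a `run_nos_mean` routine that returns the NOS mean occupancy for a given drive (a stand-in for the simulator, not part of the paper's code):

```python
import numpy as np

def mm1_mean(rho):
    return rho / (1.0 - rho)                  # M/M/1 mean occupancy

def calibrate(run_nos_mean, rhos=np.linspace(0.05, 0.3, 6)):
    """Grid the input gain; fit the output scale by least squares at light load."""
    targets = mm1_mean(rhos)
    best = None
    for gain in np.linspace(0.1, 5.0, 50):
        raw = np.array([run_nos_mean(gain * r) for r in rhos])
        scale = float(targets @ raw) / float(raw @ raw)
        err = float(np.sum((scale * raw - targets) ** 2))
        if best is None or err < best[0]:
            best = (err, gain, scale)
    return best[1], best[2]                   # fitted input gain, output scale
```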
How to read the open–loop panels (networking view).
Figure 8 (left) shows mean occupancy versus arrival rate $\lambda$ with a fixed service rate $\mu$. The blue curve (M/M/1) is the textbook reference that diverges as $\rho = \lambda/\mu \to 1$. The orange curve (M/M/1/$K$) follows M/M/1 until blocking becomes material. The green curve (NOS) rises with load at small $\rho$ (by calibration) but bends earlier as $\rho$ grows; bounded excitability and continuous pull–back reduce the subthreshold slope and drain excursions. Operationally, this mirrors how modern AQMs aim to keep average queues low near saturation rather than letting them diverge smoothly.
Figure 8 (right) plots the complementary CDFs of occupancy under the same MMPP burst sequence. M/M/1/$K$ develops a long tail during ON periods (large probability of deep queues), whereas NOS exhibits a much sharper tail because soft reset and leak truncate cascades. In networking terms, NOS converts the same burst train into fewer deep–queue events, directly improving tail latency and reducing drop/mark storms. The aggressiveness of this truncation is tunable through the subthreshold leak $\lambda$, the reset rate $r$, and the saturation knee; operators can place the tail against SLO targets.
How to read the closed–loop panels (networking view).
Figure 9 contrasts three marking strategies under the same offered–load traces. The top panel shows controller outputs $p(t)$. NOS (blue) is decisive and low–jitter; the queue controller (orange) is noisy (mark jitter follows queue noise); the LP–queue controller (green) is smoother but lags, so it reacts late to bursts. The middle panel shows a step in offered load: NOS snaps to target with minimal ringing; the queue controller oscillates; the LP–queue controller overshoots and then settles slowly. The bottom panel shows a burst train: NOS produces short, clipped spikes; the queue controller shows noisy peaks; the LP–queue controller permits taller spikes because filtering delays the reaction. Practically, NOS shrinks the time spent at high queue levels and reduces burst amplification without a long averaging window, which improves tail latency and fairness while avoiding prolonged ECN saturation.
Summary for operators.
The baselines anchor NOS against established theory. When the objective is to match the classical infinite–buffer mean at light load, M/M/1 gives the right reference and NOS can be calibrated to agree. When the objective is to manage burst risk and stability under finite resources and coupling, NOS' bounded excitability and differentiable pull–back are strictly more faithful: they reduce deep–queue probability, yield crisper ECN/marking, and shorten congestion episodes without heavy filtering. The same knobs that appear in our stability proxies (the damping $\chi$, the $\lambda$-like drain, the recovery ramp $a$, and the reset parameters) map directly to deployed policy controls (AQM leak, scheduler epoch, ramp shaping, and post–alarm conservatism), making the model both predictive and actionable. Having established mechanistic fidelity and control behaviour, we next ask how NOS fares as a label-free forecaster against compact ML/SNN baselines.
5.8 Zero-shot forecasting baselines (no labels)
Setup (like-for-like, no training).
All methods receive the same sliding window of arrival counts (the most recent bins per node) and must predict the next queue sample without supervised training. We include: (Phys.-Fluid) the label-free fluid update, which integrates arrivals minus service and clips at zero; (MovingAvg) a smoothed fluid variant; (TGNN-smooth) a single-hop temporal–graph smoother applied to arrivals before the fluid step; (LIF-leaky) a leaky-integrator forecaster; and (NOS (calibr.)) a single NOS unit per node with arrival-only calibration (no queue labels): a global input gain and per-node output scales are fitted to match light-load analytic means from M/M/1. In the run summarised here, the fitted gain and the first three per-node scales, together with the corresponding analytic light-load means for those nodes, are reported for reference only.
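For concreteness, a minimal sketch of the fluid baseline under the stated assumption that it integrates arrivals minus service and clips at zero:

```python
import numpy as np

def fluid_forecast(arrivals, mu_service, dt=1.0):
    """Integrate arrivals minus service, clipped at zero (per-node, label-free)."""
    q = 0.0
    preds = np.empty(len(arrivals))
    for t, a in enumerate(arrivals):
        q = max(0.0, q + (a - mu_service) * dt)
        preds[t] = q                          # forecast for the next bin
    return preds
```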
What the figure shows.
Figure 10 overlays the true queue on node 0 (black) with the five zero-shot forecasts. Phys.-Fluid (blue) tracks the envelope of bursts because it integrates the same arrivals that drive the queueing simulator; MovingAvg (orange) lags and underestimates peaks; TGNN-smooth (green) damps spikes via spatial smoothing; LIF-leaky (red) behaves as a low-pass filter; NOS (purple) fires promptly at burst onsets and then resets, producing compact predictions around excursions.

Numerical summary.
Table 5 reports point error (MAE) and event skill (AUROC / AUPRC for top-10% bursts), computed on the same held-out segment. Phys.-Fluid attains the smallest MAE (0.6748), as expected since the simulator itself is fluid-like at the bin scale. NOS has a larger MAE (0.9278) because it is bounded and self-resets (it does not accumulate backlog), yet it delivers strong event skill (AUROC 0.894, AUPRC 0.536), on par with Phys.-Fluid and substantially above simple smoothers. Operationally, this means NOS is well-tuned to detect and time congestion onsets without drifting into long false positives between bursts. TGNN-smooth and LIF-leaky reduce noise but lose timing contrast (near-chance event skill).
Method | MAE | AUROC | AUPRC |
---|---|---|---|
Physics Fluid | 0.6748 | 0.834 | 0.555 |
Moving Avg | 0.8782 | 0.552 | 0.194 |
TGNN-smooth | 0.9251 | 0.507 | 0.128 |
LIF-leaky | 0.8891 | 0.500 | 0.126 |
NOS (calibr.) | 0.9278 | 0.894 | 0.536 |
Networking interpretation.
In the absence of labels, a fluid predictor provides a tough MAE baseline because it integrates exactly the arrivals that create the queue, essentially reproducing backlog drift. However, operators often care more about timely onset detection than about matching every backlog micro-fluctuation. Here NOS’ bounded, event-driven dynamics produce high AUROC/AUPRC—accurate timing of burst onsets and compact responses—while avoiding prolonged marking during recoveries. This complements the open/closed-loop results: use fluid logic for gross throughput accounting, and deploy NOS to deliver low-latency, low-ringing congestion signals aligned with control actions.
6 Network-level stability and topology dependence
To simulate the dynamics of the proposed NOS model, we implemented an event–driven loop that integrates each node’s excitability, recovery variable, and incoming delayed spikes. Algorithm 1 summarises the procedure. The update consists of three stages per timestep: (i) delivery of delayed spikes, (ii) membrane and recovery updates with bounded excitability and leak, and (iii) threshold check with a soft exponential reset. This compact routine captures both local spiking dynamics and graph–level propagation through weighted and delayed edges. The full pseudocode, including optional stochastic arrivals and avalanche counting, is provided in Appendix D.
Topologies and experiments
We tested NOS on large networks of varying size $N$ with different canonical topologies:
Chain topology.
Nodes are arranged in a linear sequence, representing serial bottlenecks such as routers on a backbone link. Propagation is slowed by distance, requiring stronger coupling for instability.
Star topology.
One hub connects to all leaves, representing a single aggregation point such as a central switch. This configuration is fragile, as overload of the hub destabilises the entire structure.
Scale-free topology.
Networks generated by the Barabási–Albert model capture heterogeneous connectivity with hubs and peripheries. This structure shows intermediate behaviour: hubs can initiate cascades, but leaves constrain their spread.
Figure 11 summarises the network-level phenomena captured by NOS. Panel (a) shows that when link delays are included ($\tau_{ij} > 0$), the stability boundary shifts: oscillations emerge at lower coupling $g$, reflecting how communication latency destabilises otherwise stable traffic. Panel (b) compares absolute resilience across topologies: chains require stronger coupling to destabilise, stars are fragile because of their single hub, and scale-free networks lie in between. Panel (b′′) reframes this relative to the Perron spectral prediction. Here chains deviate strongly, sitting well above the baseline, while stars and scale-free graphs align more closely. This means that spectral tools capture hub-dominated topologies reliably but underestimate the resilience of homogeneous chains.
The emergence of coherent timing can also be tracked with a Kuramoto-type order parameter; Appendix I shows $\bar R$ versus the effective coupling $g\,\rho(W)$ (Fig. 16) and the finite-size sharpening of the transition around the Perron prediction $g_c$. Together these results demonstrate that NOS bridges dynamical-systems formalism and networking dynamics. It shows where simple spectral theory suffices (hub-dominated topologies) and where explicit simulation is required (homogeneous or delayed structures). For networking readers, the panels illustrate directly how latency, topology, and surge recovery shape resilience, providing an interpretable analogue to oscillation and congestion dynamics in real networks.
6.1 Predictive baseline: comparison with machine learning models
Many operational networks lack reliable labels, so short-horizon forecasting with residual-based event inference is the standard route. We follow that practice. We compare NOS with tGNN, GRU, RNN, and MLP under three canonical graphs: scale-free, star, and chain. For metrics we use networks of a common size; the range plots show node 0 for visual clarity. All baselines are trained label-free on next-step forecasting with a contiguous train/validation/test split. Inputs are standardised per node using statistics fitted on the train split only. Residuals are signed and $z$-scored with train-only moments; event starts are the first samples whose residual $z$-scores cross a fixed per-node threshold selected on the validation split but computed using train calibration. Episodes have a minimum duration to suppress single-sample blips; matching uses a fixed window around each ground-truth start. Start-latency is the sample difference between the truth start and the first model start within the window, converted to milliseconds using the sampling period $\Delta t$. Forecasting error (MAE, RMSE) is computed on the original scale. The tGNN uses temporal message passing constrained to the given adjacency; capacities and early-stopping are held comparable across baselines.
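A compact sketch of this protocol; the function name and defaults are illustrative, and the matching-window logic around truth starts is omitted:

```python
import numpy as np

def event_starts(y_true, y_pred, train_mask, z_thresh=3.0, min_dur=3):
    """Signed residuals, z-scored with train-only moments; min-duration episodes."""
    resid = np.asarray(y_true) - np.asarray(y_pred)
    mu = resid[train_mask].mean()
    sd = resid[train_mask].std() + 1e-9       # train-only calibration, no leakage
    z = (resid - mu) / sd
    starts, run = [], 0
    for t, flag in enumerate(z > z_thresh):
        run = run + 1 if flag else 0
        if run == min_dur:                    # suppress single-sample blips
            starts.append(t - min_dur + 1)
    return starts
```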
Figure 12 presents the test-range series with start markers for each method. A marker at the truth dot indicates a correct start, a marker to the right indicates a late start, and a marker to the left indicates an early start. Extra markers away from a truth dot are false positives, while missing markers in the vicinity of a truth dot are false negatives. Read across the three panels from scale-free to star to chain and the pattern is consistent. On the scale-free graph, bursts arrive in clusters and ramps are steep. NOS remains close to the truth throughout these clusters and avoids small oscillations. The tGNN calls tend to appear inside the ramps, which shows up as slight right shifts. RNN produces occasional early calls on moderate slopes, and GRU sits between RNN and tGNN. MLP is the least selective and places extra markers on low-amplitude wiggles. Moving to the star graph, the hub concentrates load while spokes are quieter. This helps tGNN, yet NOS remains more selective at the spokes and keeps the timing tight when the hub rises. On the chain graph, onsets are clearer and repeat through the range. NOS lands on the main rises with few stray calls in the troughs, while tGNN shows a mild lag, RNN is a little early on shallow slopes, and GRU again lies between them. The visual story across the three topologies is that NOS aligns closely without over-firing when the background is quiet.
Figure 13 gathers the aggregate scores. The bars report F1, precision, and recall under the same train-calibrated residual protocol. The curves report one-step error as MAE and RMSE and the median start-latency in milliseconds. Higher bars indicate better detection quality. Lower curves indicate better forecasting accuracy and earlier response. Across all three topologies NOS attains the highest F1 by holding strong precision while lifting recall. It also yields the lowest MAE and RMSE and the smallest latency. RNN keeps good precision but loses recall and responds later. GRU and tGNN are intermediate and more sensitive to the graph structure. MLP has higher errors and longer delays. The metric panels match what the range plots suggested: NOS is both earlier and more selective, and its forecasts give cleaner residuals for thresholding.
These differences have direct operational meaning. High precision with weak recall misses incidents that matter. High recall with weak precision floods operators with alarms. Lower forecast error stabilises residual scales and avoids threshold drift. Lower latency preserves time for pacing, admission, and rerouting. Taken together, the figures show that NOS gives an effective operating point across scale-free, star, and chain graphs. It preserves recall without inflating false positives and it warns early enough to act when load begins to rise.
All learned baselines, namely MLP, RNN, GRU, and tGNN, are trained self-supervised on next-step forecasting, and events are inferred from forecast error rather than labels. Train-only calibration prevents leakage and keeps thresholds comparable across models and topologies. The evaluation can be extended with extreme-value thresholds on residual tails, rolling-origin splits to test drift, precision–recall curves, episode-wise latency in milliseconds, and an out-of-distribution topology block, without changing the core protocol.
7 Conclusion
In this work we introduced NOS, a network-optimised spiking model tailored to packet networks with finite buffers, bursty arrivals, multi-hop coupling, and stringent latency requirements. In NOS, a bounded-excitability state $v$ (normalised queue), a recovery state $u$, and differentiable resets (event-based exponential or continuous pullback) provide an event-driven representation that is compatible with telemetry smoothing and per-link delays/gates. Analytically, existence and uniqueness of subthreshold equilibria were established, and local stability was characterised via the single-node Jacobian. Linearisation of the coupled network yielded a block Jacobian and an operational stability proxy that separates topology from node physics and predicts that the first loss of stability scales inversely with the spectral radius $\rho(W)$. The analysis distinguishes saddle–node and Hopf onsets and shows that earlier saturation enlarges headroom. Under shot-noise drive, small-signal transfer functions provide DC-gain and variance expressions that account for the observed rise in rate variability and cascade size near local and network margins. Empirically, with like-for-like inputs, NOS matches the light-load mean after a simple gain/scale calibration while truncating deep-queue tails near saturation due to bounded excitability and pullback. In closed loop, NOS-state controllers achieve low-jitter actuation with fast settling and clipped responses to bursts. In label-free forecasting, NOS provides high event skill and low start-latency across chain, star, and scale-free graphs, complementing fluid predictors that minimise mean error but react later to onsets. Network-level simulations confirm the Perron-mode prediction for hub-dominated topologies and highlight increased resilience relative to it in homogeneous chains; delay dispersion shifts stability thresholds as expected.
In summary, NOS offers a compact, analytically tractable spiking abstraction of network congestion that unifies bounded excitability, soft resets, recovery dynamics, and graph-local coupling with delays. The analysis links node physics to topology through a Perron-mode threshold, explaining when coherent congestion onsets arise and how noise amplifies near margins. Practically, the model yields direct design rules: calibrate gains to light-load statistics; schedule the global coupling $g$ against the spectral radius $\rho(W)$; raise the drain margin via service $\lambda$ and damping $\chi$; shape excitability via the saturation; and reweight or sparsify $W$ to control $\rho(W)$. Operationally, NOS-state feedback provides earlier, lower-jitter actuation than raw or low-pass queues, while retaining fluid logic for aggregate throughput accounting. These properties make NOS suitable for deployment in burst-dominated networks and amenable to neuromorphic or in-switch realisations, with stability margins that remain interpretable under moderate heterogeneity and delay dispersion.
The Perron-mode proxy and fixed-gain calibration are most reliable under moderate heterogeneity, moderate delay dispersion, and piecewise-stationary traffic; outside this regime, stability thresholds and gains should be confirmed by conservative spectral bounds or full network simulation, and recovery parameters should be identified with regularised (e.g., Bayesian) estimation to mitigate noise. Future work will develop adaptive gain scheduling under bandwidth and quantisation constraints, co-design topology and weights to enforce a fixed operational margin, and prototype neuromorphic or in-switch realisations that exploit event-driven fabrics for communication-efficient learning and control in large-scale networks.
References
- [1] L. F. Abbott, B. DePasquale, and R. M. Memmesheimer. Building functional networks of spiking model neurons. Nature Neuroscience, 19:350–355, 2016.
- [2] Mohammed Ahmed, Abdulkadir N. Mahmood, and Jiankun Hu. A survey of network anomaly detection techniques. Journal of Network and Computer Applications, 60:19–31, 2016.
- [3] Niveditha Baganal-Krishna, Ronny Hansel, Ivan Lecuyer, Romain Fontugne, Frédéric Giroire, and Dario Rossi. A federated learning approach to QoS forecasting in operational 5G networks. Computer Networks, 237:110010, 2024.
- [4] Dennis Bäßler, Tobias Kortus, and Gabriele Gühring. Unsupervised anomaly detection in multivariate time series with online evolving spiking neural networks. Machine Learning, 111(4):1377–1408, 2022.
- [5] Tianshi Chen, Zidong Du, Ninghui Sun, Jia Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. A high-throughput neural network accelerator. IEEE Micro, 35(3):24–32, 2015.
- [6] S. S. Chowdhury, D. Sharma, A. Kosta, et al. Neuromorphic computing for robotic vision: algorithms to hardware advances. Communications Engineering, 4:152, 2025.
- [7] M.C. Crair and W. Bialek. Non-boltzmann dynamics in networks of spiking neurons. In [Proceedings] 1991 IEEE International Joint Conference on Neural Networks, pages 2508–2514 vol.3, 1991.
- [8] M. Davies et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99, 2018.
- [9] Steve B. Furber, Francesco Galluppi, Steve Temple, and Luis A. Plana. The spinnaker project. Proceedings of the IEEE, 102(5):652–665, 2014.
- [11] E. Gelenbe. G-networks: a unifying model for neural and queueing networks. Annals of Operations Research, 48:433–461, 1994.
- [12] Erol Gelenbe. Random neural networks with negative and positive signals and product form solution. Neural Computation, 1(4):502–510, 1989.
- [13] Erol Gelenbe. Stability of the random neural network model. Neural Computation, 2(2):239–247, 1990.
- [14] Anna Giannakou, Dipankar Dwivedi, and Sean Peisert. A machine learning approach for packet loss prediction in science flows. Future Generation Computer Systems, 102:190–197, 2020.
- [15] Lei Guo, Dongzhao Liu, Youxi Wu, and Guizhi Xu. Comparison of spiking neural networks with different topologies based on anti-disturbance ability under external noise. Neurocomputing, 529:113–127, 2023.
- [16] M. Häusser. The Hodgkin-Huxley theory of the action potential. Nature Neuroscience, 3(Suppl 11):1165, 2000.
- [17] Adeel Iqbal, Ali Nauman, Riaz Hussain, and Muhammad Bilal. Cognitive d2d communication: A comprehensive survey, research challenges, and future directions. Internet of Things, 24:100961, 2023.
- [18] E. M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569–1572, 2003.
- [19] Jielin Jiang, Jiale Zhu, Muhammad Bilal, Yan Cui, Neeraj Kumar, Ruihan Dou, Feng Su, and Xiaolong Xu. Masked swin transformer unet for industrial anomaly detection. IEEE Transactions on Industrial Informatics, 19(2):2200–2209, 2023.
- [20] Jann Krausse, Daniel Scholz, Felix Kreutz, Pascal Gerhards, Klaus Knobloch, and Jürgen Becker. Extreme sparsity in hodgkin-huxley spiking neural networks. In 2023 International Conference on Neuromorphic Computing (ICNC), pages 85–94, 2023.
- [21] N. Langlois, P. Miche, and A. Bensrhair. Analogue circuits of a learning spiking neuron model. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, volume 4, pages 485–489 vol.4, 2000.
- [22] Honghui Liang, Zhiguang Chen, Yangle Zeng, Guangnan Feng, and Yutong Lu. DFMG: Delay-Flush Multi-Group Algorithm for Spiking Neural Network Simulation, page 479–485. Association for Computing Machinery, New York, NY, USA, 2025.
- [23] Chit-Kwan Lin, Andreas Wild, Gautham N. Chinya, Yongqiang Cao, Mike Davies, Daniel M. Lavery, and Hong Wang. Programming spiking neural networks on intel’s loihi. Computer, 51(3):52–61, 2018.
- [24] Guoqiang Liu, Guanming Bao, Muhammad Bilal, Angel Jones, Zhipeng Jing, and Xiaolong Xu. Edge data caching with consumer-centric service prediction in resilient industry 5.0. IEEE Transactions on Consumer Electronics, 70(1):1482–1492, 2024.
- [25] Nguyen Cong Luong, Dinh Thai Hoang, Shimin Gong, Dusit Niyato, Ping Wang, Ying-Chang Liang, and Dong In Kim. Applications of deep reinforcement learning in communications and networking: A survey. IEEE Communications Surveys & Tutorials, 21(4):3133–3174, 2019.
- [26] Piotr S. Maciąg, Marzena Kryszkiewicz, Robert Bembenik, Jesús L. Lobo, and Javier Del Ser. Unsupervised anomaly detection in stream data with online evolving spiking neural networks. Neural Networks, 139:118–139, 2021.
- [27] Ahlem Menaceur, Hamza Drid, and Mohamed Rahouti. Fault tolerance and failure recovery techniques in software-defined networking: A comprehensive approach. Journal of Network and Systems Management, 31(83), 2023.
- [28] Emre O. Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6):51–63, 2019.
- [29] Thuy T. T. Nguyen and Grenville Armitage. A survey of techniques for internet traffic classification using machine learning. IEEE Communications Surveys & Tutorials, 10(4):56–76, 2008.
- [30] Jonathan Platkiewicz, Eran Stark, and Asohan Amarasingham. Spike-centered jitter can mistake temporal structure. Neural Computation, 29(3):783–803, 2017.
- [31] Ali Rasteh, Florian Delpech, Carlos Aguilar-Melchor, Romain Zimmer, Saeed Bagheri Shouraki, and Timothée Masquelier. Encrypted internet traffic classification using a supervised spiking neural network. Neurocomput., 503(C):272–282, September 2022.
- [32] Mariana Segovia-Ferreira, Jose Rubio-Hernan, Ana Rosa Cavalli, and Joaquin Garcia-Alfaro. A survey on cyber-resilience approaches for cyber-physical systems. ACM Computing Surveys, 56(8):1–37, 2024.
- [33] Laith A. Shamieh, Wei-Chun Wang, Shida Zhang, Rakshith Saligram, Amol D. Gaidhane, Yu Cao, Arijit Raychowdhury, Suman Datta, and Saibal Mukhopadhyay. Cryogenic operation of computing-in-memory based spiking neural network. In Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design, ISLPED ’24, page 1–6, New York, NY, USA, 2024. Association for Computing Machinery.
- [34] Muhammad Basit Umair, Zeshan Iqbal, Muhammad Bilal, Jamel Nebhen, Tarik Adnan Almohamad, and Raja Majid Mehmood. An efficient internet traffic classification system using deep learning for iot. Computers, Materials & Continua, 71(1):407–422, 2022.
- [35] Xiaolong Xu, Chenyi Yang, Muhammad Bilal, Weimin Li, and Huihui Wang. Computation offloading for energy and delay trade-offs with traffic flow prediction in edge computing-enabled iov. IEEE Transactions on Intelligent Transportation Systems, 24(12):15613–15623, 2023.
- [36] Zhenyu Xu, Wenxin Li, Aurojit Panda, and Minlan Yu. Teal: Learning-accelerated optimization of WAN traffic engineering. In Proceedings of the ACM SIGCOMM 2023 Conference. ACM, 2023.
- [37] Xu Yang, Yunlin Lei, Mengxing Wang, Jian Cai, Miao Wang, Ziyi Huan, and Xialv Lin. Evaluation of the effect of the dynamic behavior and topology co-learning of neurons and synapses on the small-sample learning ability of spiking neural network. Brain Sciences, 12(2), 2022.
- [38] Ming Yao, Oliver Richter, Guangyu Zhao, Ning Qiao, Yannan Xing, Dingheng Wang, Tianxiang Hu, Wei Fang, Tugba Demirci, Michele De Marchi, Lei Deng, Tianyi Yan, Carsten Nielsen, Sadique Sheik, Chenxi Wu, Yonghong Tian, Bo Xu, and Guoqi Li. Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nature Communications, 15:4464, 2024.
- [39] Friedemann Zenke and Wulfram Gerstner. Limits to high-speed simulations of spiking neural networks using general-purpose computers. Frontiers in Neuroinformatics, 8:76, 2014.
- [40] Jilun Zhang and Ying Liu. Brain-like path planning algorithm based on spiking neural network. In Proceedings of the 2025 6th International Conference on Computer Information and Big Data Applications, CIBDA ’25, page 1379–1386, New York, NY, USA, 2025. Association for Computing Machinery.
- [41] Jingjing Zhao, Xuyang Jing, Zheng Yan, and Witold Pedrycz. Network traffic classification for data fusion: A survey. Information Fusion, 72:22–47, 2021.
- [42] Yuan Zhi, Jie Tian, Xiaofang Deng, Jingping Qiao, and Dianjie Lu. Deep reinforcement learning-based resource allocation for d2d communications in heterogeneous cellular networks. Digital Communications and Networks, 8(5):834–842, 2022.
Appendix A Delayed events and per-link gates in practice
Timing.
The shift $\tau_{ij}$ represents propagation and processing delay on link $(j \to i)$. It aligns presynaptic events with the time packets reach node $i$. Delay changes phase in closed loops, which affects stability margins and the onset of oscillations.
Conditioning.
The gate encodes a per-link condition such as residual capacity, scheduler priority, or queue occupancy. As congestion grows, the effective weight $g\,W_{ij}$ shrinks, so a congested link contributes less drive downstream. This mimics policing, AQM, or rate limiting and prevents the model from reinforcing overloaded paths.
Examples.
•
Datacentre incast. Multiple senders spike toward a top-of-rack switch. Per-link queues rise, the gate falls, and the effective weight shrinks, so the drive reflects effective rather than nominal bandwidth.
•
WAN hop with high delay. A large $\tau_{ij}$ shifts the presynaptic drive by tens of milliseconds. The delayed drive prevents premature reactions downstream and yields a stability test that depends on both gain and the phase introduced by the delay.
•
Wireless fading. Treat short-term SNR loss as a temporary rise in the link's congestion measure or as a change in the gate parameters. The same spike train then has smaller impact downstream, consistent with link adaptation that lowers the offered load.
Operator workflow.
Use passive RTT sampling or active probes to estimate $\tau_{ij}$. Fit $\tau_s$ to the decay of the smoothed counters when input drops, or to the effective averaging window of the QoS policy. Set $W_{ij}$ proportional to nominal capacity or administrative weight, then scale $g$ to the target small-signal index. Validate by replaying traces with known congestion episodes and checking that the gate attenuates the expected links while leaving others unchanged.
Appendix B Neuromorphic implementation details
Fixed-point state and quantisation.
Let $v$ be stored in fixed point with full-buffer scaling $v_{\max}$ and time base from (3.8). The forward map and clipping are
$\hat v \;=\; \min\!\big(v_{\max},\, \max(v_{\text{rest}},\, \Delta\,\lfloor v/\Delta \rceil)\big), \qquad \Delta = (v_{\max}-v_{\text{rest}})/(2^{B}-1)$,  (91)
where $B$ is the stored word length. With this scaling, $\hat v$ remains interpretable as a normalised queue level; overflow is prevented by the clipping in (91).
Discrete-time soft reset.
On digital substrates the exponential reset (33) is applied per tick only when a spike occurs. Let $\tilde v_i[t]$, $\tilde u_i[t]$ denote the state after the continuous update but before any reset. Then
$v_i[t] \;=\; s_i[t]\,\big(c + (\tilde v_i[t]-c)\,e^{-r\,\Delta t}\big) + (1-s_i[t])\,\tilde v_i[t]$,  (92)
$u_i[t] \;=\; \tilde u_i[t] + d\, s_i[t]$,  (93)
where $s_i[t]\in\{0,1\}$ indicates a threshold crossing at time $t$. The factor $e^{-r\,\Delta t}$ is realised by a multiply–accumulate or by a bit-shift if it is drawn from a small table.
Logistic pullback via LUT or PWL.
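One common realisation, sketched below, evaluates the logistic pullback gate from a small offline table with piecewise-linear interpolation; the breakpoints and table size are illustrative assumptions, not values from the main text.

```python
import numpy as np

def logistic_pwl(x, lo=-8.0, hi=8.0, n_seg=16):
    """Piecewise-linear logistic: table built offline, interpolated at runtime."""
    xs = np.linspace(lo, hi, n_seg + 1)
    ys = 1.0 / (1.0 + np.exp(-xs))            # breakpoint values (stored in LUT)
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    i = np.minimum(((x - lo) / (hi - lo) * n_seg).astype(int), n_seg - 1)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])    # monotone and bounded by design
```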
Graph coupling, delays, and AER routing.
Sparse coupling is realised by address–event routing tables; per-link delays are integer tick buffers,
$I_i[t] \;\mathrel{+}=\; g\,W_{ij}\, s_j[t - d_{ij}], \qquad d_{ij} = \lceil \tau_{ij}/\Delta t \rceil$,  (95)
which preserves the causality of (22). Gates are evaluated where the link queue state is maintained; their outputs modulate the synaptic multiplier before accumulation.
Fabric-rate and budget constraints.
Let $R$ be the sustainable spike throughput per core and let $r_j$ be the observed spike rate of node $j$. A conservative admission constraint that avoids router overload is
$\sum_{j:\,W_{ij}\neq 0} r_j \;\le\; R \quad \text{for every core } i$,  (96)
or, in matrix form for a deployment-wide check, $\max_i \mathrm{nnz}_i(W)\,\bar r \le R$, where $\mathrm{nnz}_i(W)$ counts nonzeros per row and $\bar r$ is the mean observed rate.
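A sketch of this check, assuming `rates` holds observed per-node spike rates and `W[i, j]` is the weight from node j into node i as in Appendix D:

```python
import numpy as np

def admissible(W, rates, R_core):
    """Check per-core incoming spike rate against the fabric budget (96)."""
    load = np.array([rates[W[i] != 0].sum() for i in range(W.shape[0])])
    return bool((load <= R_core).all()), float(load.max())
```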
Operational margin under quantisation.
Networking alignment and reporting.
Choose the scaling so that $v_{\max}$ matches the full-buffer level used by the QoS policy and the tick matches the scheduler epoch or drain time. Report energy per spike and energy per detection alongside CPU baselines, and log delay histograms to confirm that delay quantisation preserves path ordering. These checks keep the deployed NOS consistent with the queueing semantics and timing used in the main text.
Appendix C Linear decay times and DC sensitivity
Eigenvalues and decay times.
Let $\lambda_{1,2}$ be the eigenvalues of the local Jacobian $J$ defined in §5.2. When the Routh–Hurwitz conditions in (65) hold and $(\operatorname{tr}J)^2 \ge 4\det J$, the node is overdamped with two real negative modes. When $(\operatorname{tr}J)^2 < 4\det J$, it is underdamped with complex-conjugate modes whose common decay rate is $-\operatorname{tr}J/2$. The dominant linear return time is
$\tau_{\text{ret}} \;=\; \dfrac{1}{\min_k |\operatorname{Re}\,\lambda_k|}$,  (98)
which provides an engineering handle for truncated-BPTT windows and for selecting the reset constant $r$ so numerical timescales match device drain and scheduler epochs. In practice, $\tau_{\text{ret}}$ estimated from small perturbations of $v$ around an operating point aligns well with the observed relaxation of the queue proxy.
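A minimal sketch computing (98) from the two-variable Jacobian, using the parameter names of Appendix D (s stands for the local subthreshold slope):

```python
import numpy as np

def return_time(s, a, b, mu):
    """Dominant linear return time (98) from the 2x2 local Jacobian."""
    J = np.array([[s, -1.0], [a * b, -(a + mu)]])
    lam = np.linalg.eigvals(J)
    assert (lam.real < 0).all(), "stable operating point required"
    return 1.0 / np.abs(lam.real).min()       # slowest-decaying mode
```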
Input–output small-signal gain.
Let $\delta I$ be a small constant perturbation of the drive and $\delta v^\ast$ the induced steady-state change in $v$. Using the equilibrium implicit-function relation from (61), the DC gain is
$\dfrac{\delta v^\ast}{\delta I} \;=\; \dfrac{a+\mu}{ab - s\,(a+\mu)}, \qquad s = f'(v^\ast) + \beta - \lambda - \chi$.  (99)
Larger service $\lambda$ and stronger damping $\chi$ decrease this gain, making the operating point less sensitive to slow offered-load drift. This matches queueing intuition and yields a direct calibration target: estimate the gain from step tests or natural experiments in telemetry, then verify that the inferred parameters are consistent with the Jacobian conditions in (65).
Appendix D NOS simulation pseudocode
For reproducibility we include the pseudocode used in our experiments (bin $\Delta t = 5$ ms).
```
# NOS simulation with delays and shot-noise arrivals

# Initialisation
for each node i:
    v[i] = v_rest + small_noise()
    u[i] = u0
for each directed link (j -> i):
    if tau_ij > 0: init delay_buffer[j->i]

# Main loop
for t in time_steps:
    # 1) deliver delayed spikes scheduled for this time
    for each link (j -> i):
        S_delivered = pop(delay_buffer[j->i], t)   # 0/1 spike
        I_syn[i] += g * W[i,j] * S_delivered

    # 2) exogenous input (e.g., shot noise, optionally smoothed)
    for each node i:
        I_ex[i] = eta_i(t)   # may include I0 + gain * arrivals_i(t), low-pass with tau_s if used
        I[i] = I_syn[i] + I_ex[i]

    # 3) state updates (forward Euler, dt = 5 ms)
    for each node i:
        v_new = v[i] + dt * ( f_sat(v[i]) + beta*v[i] + gamma - u[i] + I[i]
                              - lambda*v[i] - chi*(v[i] - v_rest) )
        u_new = u[i] + dt * ( a*(b*v[i] - u[i]) - mu*u[i] )
        v[i] = clamp(v_new, v_rest, v_max)
        u[i] = clamp(u_new, u_min, u_max)

    # 4) thresholding and reset (deterministic by default)
    for each node i:
        v_th_eff = v_th   # or v_th + sigma * randn() if optional jitter is enabled
        if v[i] >= v_th_eff:
            emit_spike(i, t); record_time(i, t)
            for each outgoing link (i -> k):
                schedule(delay_buffer[i->k], t + tau_ik)   # integer bin delay
            # soft exponential pullback and recovery kick
            v[i] = c + (v[i] - c) * exp(-r_reset * dt)
            u[i] = u[i] + d

    # 5) zero synaptic accumulator for next step
    for each node i:
        I_syn[i] = 0

    # record observables: mean rate, ISI CV, avalanche sizes, synchrony
```
Appendix E Statistical robustness under noise
For each condition we estimated the mean firing rate $r$ and the inter-spike-interval coefficient of variation (CV) using non-parametric bootstrap resampling (hundreds of replicates). Bootstrap 95% intervals show that both $r$ and CV increase with shot-noise amplitude, particularly near threshold. In NOS, stronger stochastic drive raises overall excitability and increases irregularity, mirroring more variable queue dynamics.
Avalanche size distributions were then analysed. Figure 14 shows empirical complementary cumulative distributions with fitted power-law and log-normal overlays above a data-driven tail threshold $s_{\min}$. Tail parameters were estimated by maximum likelihood with truncation at $s_{\min}$; goodness was assessed by the Kolmogorov–Smirnov distance and model choice by AIC. Across conditions, AIC favours the power-law, indicating scale-free-like behaviour near threshold. Fitted values are reported in Table S6.
The exponents display a clear trend: at low amplitude ($A = 0.3$) the power-law exponent is $\hat\alpha \approx 8.7$ (steep decay, small avalanches). At $A = 0.6$ the exponent decreases to $\approx 6.4$ (broader tail). At $A = 0.9$ the exponent drops to $\approx 3.3$ (heavy tail and a higher chance of large cascades). Thus, near the bifurcation, stronger stochastic drive allows small fluctuations to trigger system-wide congestion events.

Power-law tail fits (columns 2–7) and log-normal tail fits (columns 8–14):
$A$ | $s_{\min}$ | $n$ | $\hat\alpha$ | KS | LL | AIC | $s_{\min}$ | $n$ | $\hat\mu$ | $\hat\sigma$ | KS | LL | AIC | Pref.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.3 | 2.0 | 63 | 8.73 | 0.76 | 14.03 | -26.07 | 2.0 | 63 | 0.82 | 0.27 | 0.76 | -34.20 | 72.39 | PL
0.6 | 4.0 | 31 | 6.43 | 0.58 | -27.24 | 56.48 | 4.0 | 31 | 1.57 | 0.31 | 0.58 | -46.15 | 96.31 | PL
0.9 | 1.0 | 681 | 3.25 | 0.53 | -431.13 | 864.26 | 1.0 | 681 | 0.44 | 0.53 | 0.53 | -684.68 | 1373.35 | PL
Avalanche definition and fitting.
Avalanche sizes were computed from population spike counts in 5 ms bins (the experimental $\Delta t$): an avalanche is a contiguous run of nonzero bins, with size equal to the total spikes in the run. For each condition, $s_{\min}$ was chosen to minimise the KS distance between empirical and model CDFs (tail only). Power-law tails used the continuous-tail MLE with model $p(s) \propto s^{-\alpha}$ for $s \ge s_{\min}$. Log-normal tails were fit by MLE on $\log s$ with truncation at $s_{\min}$. Model selection used AIC with parameter counts 1 (power-law) and 2 (log-normal).
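For reference, a sketch of the continuous-tail MLE used here (the standard Hill-type estimator):

```python
import numpy as np

def powerlaw_alpha(sizes, s_min):
    """Continuous-tail MLE for p(s) ~ s^(-alpha), s >= s_min."""
    tail = np.asarray(sizes, dtype=float)
    tail = tail[tail >= s_min]
    return 1.0 + tail.size / np.log(tail / s_min).sum()
```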
Appendix F Admissible parameter region and calibration conventions
Table 7 records the admissible region explored in our experiments. The entries are expressed on the experimental scale used throughout the paper, with queue fullness normalised to $[0,1]$ and a sampling bin of $\Delta t = 5$ ms. Rates given "per bin" convert to per second by multiplying by $1/\Delta t = 200$. We separate structural constraints from initialisation choices so that the table serves reproducibility without implying narrow tuning.
The bounds have three roles. First, they enforce the normalisation and sampling assumptions used by the model and code. Second, they keep the sufficient stability condition from Lemma 2 satisfied with margin for the operating regimes we test. Third, they define a common hyperparameter space for stress tests across the three graph families. Within this region we draw values uniformly unless stated and refine only on the validation split.
Coupling and reset are disambiguated. The network coupling index is $g\,\rho(W)$, where $\rho(W)$ is the spectral radius after normalisation and $g$ is the scalar gain applied during experiments. The reset-gate sharpness is a separate parameter and is independent of $g$. Delays are given in milliseconds and reflect link propagation on the chosen binning. Shot-noise parameters match the noise-sensitivity experiments and allow drive strength to approach the near-threshold regime without violating the stability bound.
The ranges for the excitability scale $\alpha$, the saturation knee, and the gain $\beta$ control the shape and slope of the bounded excitability $f$. The leak terms $\lambda$ and $\chi$ set the small-signal decay about $v_{\text{rest}}$ and are bounded to keep subthreshold dynamics stable at the 5 ms resolution. Recovery parameters $(a, b, \mu)$ govern post-burst relaxation and are given as per-bin rates. Threshold $v_{\text{th}}$, reset depth $c$, recovery jump $d$, and pullback speed $r$ determine spiking and refractory behaviour. The mapping from arrivals to effective input uses an offset $I_0$ and a dimensionless gain, and may be optionally low-pass filtered with $\tau_s$.
These bounds are admissible rather than prescriptive. Defaults used in the main text fall within Table 7 and are reported in the parameter-initialisation table. Sensitivity studies widening each bound by a factor of two preserve the relative ordering of methods on F1 and latency, with the largest trade-offs arising from the firing-threshold and drive-gain bounds, as expected from precision–recall balance. All residual scalers and thresholds are fitted on train only, and validation is used to select per-node thresholds within the same admissible region.
Parameter | Interpretation | Admissible range (units)
---|---|---
$\alpha$ | excitability scale in $f$ | (dimensionless)
knee | saturation knee of $f$ | (dimensionless)
$\beta$ | linear excitability gain | (dimensionless)
$\gamma$ | constant drive (baseline load) | (in $v$ units)
$\lambda$ | service/leak on $v$ | (per bin†)
$\chi$ | subthreshold damping about $v_{\text{rest}}$ | (per bin)
$a$ | recovery rate | (per bin)
$b$ | recovery sensitivity to congestion | (dimensionless)
$\mu$ | passive recovery decay | (per bin)
$v_{\text{th}}$ | firing threshold | (in $v$ units)
sharpness | sharpness of reset gate | (dimensionless)
$r$ | pullback/reset speed | (per bin)
$c$ | post-event baseline level | (in $v$ units)
$d$ | recovery jump on event | (in $v$ units)
$I_0$ | drive offset (arrivals) | (in $I$ units)
gain | drive scale (arrivals) | (dimensionless)
$\tau_s$ | optional drive smoothing | (ms)
$g$ | coupling on $W$ (network) | choose so $g\,\rho(W)$ stays below threshold
Shot noise | arrival model for synthetic tests | rate (Hz), amplitude (dimensionless)
Delays | link propagation | (ms)
normalisation | spectral scaling | $\rho(W)=1$ before applying $g$
† Per-bin rates correspond to a 5 ms bin width. Multiply by 200 to convert to s⁻¹.
Appendix G Global stability (conservative small-signal margin)
Recall that the spectral radius satisfies $\rho(W) \le \|W\|_\infty$, and a sufficient topology-aware bound is given in (79). For capacity planning we use the conservative small-signal operational margin
$M(g) \;=\; -s \;-\; g\,\rho(W), \qquad s \equiv f'(v^\ast) + \beta - \lambda - \chi$.  (100)
Maintaining $M > 0$ under nominal load provides headroom against stochastic fluctuations. The margin is linear in $g$ with slope $-\rho(W)$; increases in subthreshold damping $\chi$ (or service $\lambda$) raise the intercept $-s$ by making the local slope more negative.
Illustrative computation.
Using the legacy "starter" values (kept here for continuity) for $(\alpha, \beta, \gamma, \lambda, \chi, a, b, \mu, g)$, the margin evaluates directly from (100).
At the nominal coupling the margin is positive, and raising the damping $\chi$ enlarges it further, yielding a substantial safety increase. Note: these numbers are purely illustrative for the quadratic surrogate; in the main experiments we use the admissible ranges in Table 7. The linear dependence on $g$ and the monotone effect of $\chi$ hold in either case.

Supporting sweeps.
Figure 15 shows $M$ against $g$ for several values of $\chi$. The slope is $-\rho(W)$ by construction; increasing $\chi$ shifts the intercept upward (analogous to higher service slack). For the legacy starter set above, the margin crosses zero near the critical coupling $g = -s/\rho(W)$, consistent with (100). Increasing $\chi$ increases the intercept $-s$ and hence the scalar margin. Full sweeps and Monte Carlo robustness are provided for reference.
Appendix H Conservative stability via Gershgorin and norm bounds
Let $W \ge 0$ with $\rho(W)$ the spectral radius after normalisation (we use $\rho(W)=1$ in experiments). Linearising (9)–(10) at the equilibrium gives a block Jacobian $J$. For the top block rows ($v$-components),
$s_i = f'(v_i^\ast) + \beta - \lambda - \chi$
is the diagonal entry, and the off-diagonals are $g\,W_{ij}$ for $j \ne i$, plus the coupling to $u_i$ with derivative $-1$. Thus the Gershgorin centre and radius are
$c_i = s_i, \qquad R_i = g \sum_{j \ne i} |W_{ij}| + 1$.
A sufficient condition for those discs to lie strictly in the left half-plane is $c_i + R_i < 0$,
which yields
$g \sum_{j \ne i} |W_{ij}| \;<\; \lambda + \chi - \beta - f'(v_i^\ast) - 1$.
For the bottom block rows ($u$-components), the centre is $-(a+\mu)$ and the radius is $ab$, so a sufficient condition is
$a + \mu \;>\; ab$.
Using $\sum_j |W_{ij}| \le \|W\|_\infty$ gives the norm variant
$g\,\|W\|_\infty \;<\; \lambda + \chi - \beta - \max_i f'(v_i^\ast) - 1$.
These conditions are sufficient (not necessary) and are conservative relative to observed thresholds.
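A sketch of the resulting coupling budget, using the $v$-block condition above (parameter names follow Appendix D; `fprime_star` is f'(v*)):

```python
import numpy as np

def g_gershgorin(W, fprime_star, beta, lam, chi):
    """Conservative coupling bound from the v-block Gershgorin condition."""
    headroom = (lam + chi) - beta - fprime_star - 1.0
    if headroom <= 0.0:
        return 0.0                            # no certified coupling budget
    return headroom / np.abs(W).sum(axis=1).max()
```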
$\rho(W)$ | $g_c$ (meas.) | $g_c\,\rho(W)$ | $g_c$ (Gersh.) | $g_c^{\text{scalar}}$ | ratio | Gersh. conservative
---|---|---|---|---|---|---
0.5 | 1.881799 | 0.940900 | 0.554642 | 0.940878 | 1.000023 | True
1.0 | 0.940924 | 0.940924 | 0.277321 | 0.940878 | 1.000049 | True
1.5 | 0.627452 | 0.941177 | 0.184881 | 0.940878 | 1.000319 | True
2.0 | 0.470484 | 0.940969 | 0.138660 | 0.940878 | 1.000097 | True
3.0 | 0.314042 | 0.942125 | 0.092440 | 0.940878 | 1.001326 | True
Table 8 complements the figures: the measured thresholds obey the Perron-mode scaling ($g_c\,\rho(W) \approx g_c^{\text{scalar}}$), while Gershgorin discs provide only conservative certificates. From a networking standpoint, the stability boundary is captured by interpretable quantities: the effective small-signal slope (through $f'$ and $\beta$), the service and recovery settings ($\lambda$, $\chi$, $a$, $b$, $\mu$), and the graph's spectral radius $\rho(W)$ encoding topology. This bridges low-level queue dynamics with high-level network design, showing how structural and service parameters jointly control the emergence of congestion instabilities.
Appendix I Finite-size sharpening and synchrony order parameter
Finite networks smooth the bifurcation transitions seen in the scalar reduction. We quantify collective timing with a Kuramoto-type order parameter
$R(t) \;=\; \Big|\, \frac{1}{N} \sum_{j=1}^{N} e^{\,\mathrm{i}\,\phi_j(t)} \,\Big|$,
where spike phases $\phi_j(t)$ are obtained by linear phase interpolation between successive spikes of node $j$ (phase resets to $0$ at a spike, advances to $2\pi$ at the next). Figure 16 plots the time average $\bar R$ against the effective coupling $g\,\rho(W)$ for several network sizes $N$. The curves show the expected S-shaped rise: for small coupling the system is desynchronised and $\bar R$ is low; once the coupling passes a critical value the order parameter increases rapidly and then saturates.
The dashed line marks the Perron-mode prediction $g_c = g_c^{\text{scalar}}/\rho(W)$ from the linear analysis of NOS. Across sizes, measured onsets cluster near $g_c$. Finite-size effects smooth the transition for small $N$ and sharpen it as $N$ grows, while the turning point remains anchored by the Perron prediction.
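A sketch of this computation, assuming per-node sorted spike-time arrays with at least two spikes each (outside the spiking range, np.interp clamps the phase to the endpoints):

```python
import numpy as np

def order_parameter(spike_times, t_grid):
    """R(t) from per-node spike trains via linear phase interpolation."""
    phases = []
    for ts in spike_times:                    # sorted spike times of one node
        phi = np.interp(t_grid, ts, 2.0 * np.pi * np.arange(len(ts)))
        phases.append(phi)                    # phase advances 2*pi per spike
    z = np.exp(1j * np.array(phases)).mean(axis=0)
    return np.abs(z)                          # time-average gives R-bar
```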
Networking interpretation.
Synchrony corresponds to queueing cycles that align across nodes: burst build-up and drain events become temporally coordinated. Below $g_c$ these cycles are weakly correlated and buffers clear largely independently. Near and above $g_c$, coupling is strong enough for delayed spikes to propagate and reinforce, producing network-wide burst trains. Because $g_c \propto 1/\rho(W)$, stability can be preserved either by reducing the local coupling gain $g$ (e.g. weight attenuation or stronger subthreshold leak) or by reducing the spectral radius (e.g. reweighting or sparsifying high-gain pathways). Both actions move the operating point away from the synchrony threshold.

$N$ | $g\,\rho(W)$ | $\bar R$
---|---|---
64 | 0.0 | 0.100
64 | 1.0 | 0.207
64 | 2.0 | 0.822
128 | 0.0 | 0.059
128 | 1.0 | 0.148
128 | 2.0 | 0.769
256 | 0.0 | 0.061
256 | 1.0 | 0.082
256 | 2.0 | 0.755