Anthony Polloreno, Ph.D.

@ampolloreno

Engineer

The Impact of Noise on Recurrent Neural Networks II

In this section, we simulate the echo state networks discussed in the last post. The problem comes with some unusual constraints, particularly around variable training lengths. After some exploration (detailed in the appendix), I settled on a more effective strategy: simulating ensembles of reservoirs.
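To make the ensemble idea concrete, here is a minimal sketch (plain NumPy, not the notebook's actual code) of advancing every reservoir in an ensemble with one batched update, assuming the standard echo state recurrence x(t+1) = tanh(W x(t) + W_in u(t)); all sizes and names are illustrative.

```python
# A minimal sketch of simulating an ensemble of reservoirs with one batched
# update per time step. Sizes, seeds, and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_reservoirs = 8       # ensemble size
n_nodes = 100          # nodes per reservoir
spectral_radius = 0.9

# One random recurrent matrix and input vector per ensemble member,
# rescaled so each recurrent matrix has the target spectral radius.
W = rng.normal(size=(n_reservoirs, n_nodes, n_nodes))
W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max(axis=1)[:, None, None]
W_in = rng.normal(size=(n_reservoirs, n_nodes))

def step(x, u):
    """Advance every reservoir in the ensemble by one time step."""
    # x has shape (n_reservoirs, n_nodes); u is a scalar input shared by all.
    return np.tanh(np.einsum("rij,rj->ri", W, x) + W_in * u)

x = np.zeros((n_reservoirs, n_nodes))
for u in rng.normal(size=1_000):   # placeholder input time series
    x = step(x, u)
```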

Each reservoir (and each reservoir size) ideally requires a different training length to avoid over- or underfitting, but even when we ignore this, performance still improves with size. The ensemble approach also introduces a natural axis of parallelization. To support it, we chunk the input time series, since loading it fully into GPU memory wastes space needed for simulating activations.
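As a rough sketch of the chunking strategy, again illustrative and reusing the hypothetical `step` function from the sketch above, the input can be streamed through the reservoirs in fixed-size chunks while the reservoir state is carried across chunk boundaries.

```python
# Stream the input series through the ensemble one chunk at a time, so the
# full series never has to sit in device memory alongside the activations.
# The chunk size is a placeholder.
import numpy as np

def run_chunked(step, x0, series, chunk_size=10_000):
    """Advance the ensemble through `series`, carrying state across chunks."""
    x = x0
    collected = []
    for start in range(0, len(series), chunk_size):
        chunk = series[start:start + chunk_size]
        # In a GPU implementation, only `chunk` is copied to the device here.
        chunk_states = []
        for u in chunk:
            x = step(x, u)
            chunk_states.append(x)
        # Activations for this chunk can be moved back to host memory (or
        # reduced to readout statistics) before the next chunk is simulated.
        collected.append(np.stack(chunk_states))
    return np.concatenate(collected)

# Example usage with the ensemble from the previous sketch:
# states = run_chunked(step, np.zeros((n_reservoirs, n_nodes)), long_series)
```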

Our main goal is to study echo state networks with output product signals and examine how noise affects their performance. Noise is ubiquitous: digital systems typically mitigate it with error correction, but native ML hardware doesn't enjoy the same luxury.
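For concreteness, one plausible reading of the product signals (my illustrative interpretation, not the notebook's code) is to extend the raw output signals with their pairwise products.

```python
# A hypothetical helper that augments a vector of output signals with all of
# their pairwise products (including squares). Purely illustrative.
import numpy as np

def product_features(outputs):
    """Return the raw output signals plus their pairwise products."""
    outputs = np.asarray(outputs)
    i, j = np.triu_indices(outputs.shape[-1])
    return np.concatenate([outputs, outputs[..., i] * outputs[..., j]], axis=-1)
```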

Low floating-point precision can introduce rounding errors that accumulate, especially in recurrent systems. Thermal and sensor noise can further degrade signal fidelity. These are practical challenges in real-world systems that deal with analog or probabilistic processes.
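As a toy illustration (not from the notebook) of how limited precision compounds in a recurrent computation, running the same chaotic recursion in float32 and float64 produces trajectories that disagree at order one within a few hundred steps.

```python
# Iterate the logistic map in float32 and float64; rounding differences are
# amplified by the recurrence until the two trajectories decorrelate.
import numpy as np

x32 = np.float32(0.3)
x64 = np.float64(0.3)
for _ in range(500):
    x32 = np.float32(3.9) * x32 * (np.float32(1.0) - x32)
    x64 = 3.9 * x64 * (1.0 - x64)

print(abs(float(x32) - x64))   # typically of order 1 after 500 steps
```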

While this post isn't focused on reinforcement learning, the parallel is worth noting. In REINFORCE-style algorithms (e.g., Williams, 1992), sampled data is used to estimate policy gradients in expectation, and those estimates can be noisy because of both finite samples and model imperfections. Our approach here is simpler: we inject Gaussian noise directly into the output values of the network.
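Concretely, the noise model amounts to something like the following sketch; the noise scale below is a placeholder, not a value from the notebook.

```python
# Inject i.i.d. zero-mean Gaussian noise into the network's output values.
import numpy as np

rng = np.random.default_rng(1)

def inject_output_noise(outputs, sigma=0.01):
    """Return the output signals corrupted by Gaussian noise of scale sigma."""
    outputs = np.asarray(outputs)
    return outputs + rng.normal(scale=sigma, size=outputs.shape)

# Example: corrupt a vector of readout signals before evaluating performance.
noisy = inject_output_noise([0.2, -0.5, 1.3], sigma=0.05)
```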

Check out the next notebook here!

Acknowledgements

Thanks to Alex Meiburg, André Melo, and Eric Peterson for their helpful feedback on this post.