The Impact of Noise on Recurrent Neural Networks I
In this post, we study reservoir computing, a class of recurrent neural networks (RNNs) used for complex temporal processing tasks. We'll examine what happens when noise is introduced into these systems.
Reservoir computing is especially interesting for dynamics prediction in physical systems, though it can also be adapted to tasks like language modeling. Compared to transformer-based methods, it is academically underexplored and pedagogically simple, which makes it ideal for learning. Training involves just a linear readout layer, with the recurrent structure derived from a fixed random matrix, making these models easy to interpret.
We’ll focus on echo state networks (ESNs), which include a few tunable parameters: sparsity, time-step bleedthrough (the leak rate), an encoding map, a decoding map, a transition matrix, and reservoir size. Interestingly, even with random initialization of the encoding and transition maps, ESNs can effectively predict sequences using a simple linear decoding layer—solvable with least squares.
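To make these ingredients concrete, here is a minimal NumPy sketch of an ESN: a random encoding map, a sparse random transition matrix, a leaky-tanh state update, and a linear readout fit by least squares. The variable names and hyperparameter values (`n_reservoir`, `leak`, `spectral_radius`, the placeholder input/target) are illustrative assumptions, not the exact configuration used in the notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumed values, not the notebook's settings).
n_reservoir = 200      # reservoir size
sparsity = 0.9         # fraction of zeroed entries in the transition matrix
leak = 0.3             # time-step bleedthrough (leak rate)
spectral_radius = 0.9  # rescale W so the dynamics stay stable

# Random encoding map and sparse random transition matrix (both fixed, untrained).
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W[rng.random(W.shape) < sparsity] = 0.0
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with a 1-D input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        pre = W_in @ np.array([u_t]) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)  # leaky update
        states.append(x.copy())
    return np.array(states)

# Only the linear decoding map is trained, via least squares on the states.
u = rng.uniform(0.0, 0.5, size=1000)   # placeholder input sequence
y = np.roll(u, -1)                     # placeholder target: predict the next input
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ W_out
```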
The goal of this tutorial is to understand how noise impacts the computational power of reservoir systems, especially as a function of size. The analysis extends to many analog systems with continuous-valued internal states. We'll show how noise significantly degrades computational capability, and kick things off with a notebook introducing the model (ESNs) and the task (NARMA10).
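For reference, NARMA10 is a standard reservoir-computing benchmark: the target at each step depends nonlinearly on the last ten outputs and a delayed input. Below is a short sketch of the commonly used recurrence; the sequence length and random seed are arbitrary choices for illustration.

```python
import numpy as np

def narma10(u):
    """Generate NARMA10 targets from a driving input sequence u."""
    y = np.zeros_like(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])  # last 10 outputs
                    + 1.5 * u[t - 9] * u[t]                  # delayed x current input
                    + 0.1)
    return y

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 0.5, size=2000)  # i.i.d. uniform input, as is conventional
y = narma10(u)                        # target sequence the ESN must reproduce
```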
Check out the next notebook here!
Acknowledgements
Thanks to Alex Meiburg, André Melo, and Eric Peterson for their thoughtful feedback on this post.