Before AI and neural nets, the excitement was about iterative learning control (ILC): the idea of training robots to perform repetitive tasks, or training a system to reject quasi-periodic disturbances. The excitement waned after the discovery of “bad learning transients” in systems that satisfy the ILC asymptotic convergence stability criteria. These transients can persist long after the eigenvalues imply the error should have decayed, and can grow by orders of magnitude before subsiding. The field recovered with the introduction of tests for “monotonic convergence of the vector norm”, but no deep and truly satisfying explanation was offered.
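The gap between the two criteria can be illustrated with a minimal sketch. The toy plant, gains, and dimensions below are assumptions for illustration, not taken from this paper: a lifted ILC error recursion e_{k+1} = (I − P L) e_k whose iteration matrix has spectral radius well below one (asymptotic convergence holds) yet whose induced 2-norm exceeds one (the monotonic test fails), producing a transient that grows by many orders of magnitude before it decays.

```python
import numpy as np

# Assumed toy plant: lower-triangular Toeplitz matrix built from the
# impulse response (1, -2, 0, 0, ...) over N trial samples.
N = 60
P = np.eye(N) - 2.0 * np.eye(N, k=-1)
L = 0.5 * np.eye(N)        # simple proportional learning gain (assumption)
T = np.eye(N) - P @ L      # trial-to-trial error transition matrix

# T is lower triangular, so its eigenvalues are its diagonal entries:
# spectral radius 0.5 < 1, hence the asymptotic convergence test passes.
rho = max(abs(np.diag(T)))
# The monotonic convergence test uses the induced 2-norm, which is > 1 here.
sigma = np.linalg.norm(T, 2)

e = np.zeros(N)
e[0] = 1.0                 # initial tracking error: a unit impulse
norms = []
for _ in range(300):       # iterate over learning trials
    norms.append(np.linalg.norm(e))
    e = T @ e

print(f"spectral radius {rho:.2f}, induced 2-norm {sigma:.2f}")
print(f"peak/initial error ratio: {max(norms) / norms[0]:.2e}")
print(f"error norm after 300 trials: {np.linalg.norm(e):.2e}")
```

The error eventually converges, as the eigenvalue test promises, but only after a transient whose peak dwarfs the initial error, which is the phenomenon described above.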
Since 2016, this author has demonstrated that an entirely new class of solutions, namely solitons, satisfies the recurrence equations of ILC and offers a deep explanation of “bad learning”. A soliton is a wave-like object that emerges in a dispersive medium and travels at an identifiable speed with little or no change of shape. This paper is the first public presentation of the soliton solutions, which may occur for both causal (i.e. look-back) and acausal (i.e. look-ahead) learning functions whose matrix representations have diagonal band structure.
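The diagonal band structure can be made concrete with a small sketch. The tap positions and gains below are illustrative assumptions, not values from the paper: a learning update of the form u_{k+1}(t) = u_k(t) + Σ_j l_j e_k(t + j) lifts to u_{k+1} = u_k + L e_k, where each tap l_j fills the diagonal at offset j of L. Taps with j < 0 (look back) populate subdiagonals, giving a causal learning function; taps with j > 0 (look ahead) populate superdiagonals, giving an acausal one.

```python
import numpy as np

# Assumed tap gains: offset -1 is a causal (look-back) tap,
# offset 0 acts on the current sample, offset +1 is an acausal
# (look-ahead) tap.  Together they give L a diagonal band of width 3.
N = 8
taps = {-1: 0.2, 0: 0.5, 1: 0.3}

# np.eye(N, k=j) places ones on the diagonal at offset j,
# so summing the weighted offsets builds the banded Toeplitz matrix L.
L = sum(g * np.eye(N, k=j) for j, g in taps.items())
print(L)
```

Restricting the taps to j ≤ 0 yields a lower-triangular (purely causal) L; allowing j > 0 makes L acausal, which is permissible in ILC because the entire error signal from the previous trial is available when the next trial's input is computed.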