
Abstract Reservoir Computing

by Christoph Walter Senn 1,2,*,† and Itsuo Kumazawa 2,†

1 Institute of Applied Mathematics and Physics, School of Engineering, Zurich University of Applied Sciences, 8401 Winterthur, Switzerland
2 Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Tokyo 152-8550, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
AI 2022, 3(1), 194-210; https://doi.org/10.3390/ai3010012
Submission received: 31 January 2022 / Revised: 3 March 2022 / Accepted: 5 March 2022 / Published: 10 March 2022
(This article belongs to the Section AI Systems: Theory and Applications)

Abstract

Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as the ones used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing using sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of errors that can appear in real-world scenarios, and to compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations.

1. Introduction

In recent years, physical reservoir computing has enjoyed increased popularity in a wide variety of fields. Since the emergence of reservoir computing as a computational framework for recurrent neural networks (RNNs) in the form of echo state networks [1] and liquid state machines [2], it has transcended the digital world and found various applications in physical systems. This is due to the nature of the fixed reservoir, which allows other dynamical systems to be used as reservoirs, given that they exhibit certain properties. Over the years, reservoir computing systems have used buckets of water [3], light [4,5], and soft robots [6,7]. This development has given rise to the field of physical reservoir computing. Recent advances in this field, like the origami reservoir [8] or mass-spring networks [9], have extensively used numerical simulations as part of their research.
Depending on the application, numerical simulations are inevitable; think of systems that work in difficult-to-access environments, where a loss can be hazardous or expensive, or where building is time-consuming and costly. Although such numerical simulations have improved in fidelity, it is not possible to accurately represent all facets of physical systems in simulated environments. This leads, in part, to a gap between simulations and reality, also called the sim2real gap.
In addition to errors caused by differences in fidelity, hardware issues are also a source of concern. Sensors might break, misbehave, or become susceptible to noise. Accounting for all possible sources of errors is difficult and time-consuming, but necessary to create reliable and safe systems. Fortunately, the field of formal verification for software systems gives us tools to handle the complexity of such situations.
One of these tools, abstract interpretation [10], helps in dealing with such uncertainties by enveloping them in abstract objects. This idea has also recently found its way to the neural network community, both for verification [11,12,13] and for training [13,14]. By abstracting single data-points with sets of points, e.g., by encompassing them in a hypercube, as illustrated in Figure 1, it becomes possible to work with the complete neighbourhood of such data-points without having to sample all points that lie around them.
By using simple interval arithmetic [15], Senn and Kumazawa [16] have shown how this idea can be leveraged to train robust echo state networks. We build on this approach and show how it could be applied to create robust physical reservoir computing systems in simulations. Our contributions are:
  • An abstract regulariser leading to robust weights for reservoir computing systems
  • A closed form solution for the regression problem, using the abstract regulariser
  • Numerical study on the robustness of physical reservoir computing systems against different types of errors
Section 2 introduces physical reservoir computing in the form of mass-spring networks and how we use abstraction to improve the robustness of such systems; furthermore, it presents the datasets and types of errors considered, along with the general experimental setup. The achieved results are then presented and discussed in Section 3. Finally, in Section 4, we give a conclusion and an outlook for future research endeavours.

2. Materials and Methods

In this section, we provide a brief introduction to reservoir computing (see Section 2.1), a physical implementation based on mass-spring networks (see Section 2.2), and how we can improve the robustness of such systems against noise introduced by the sim2real gap (see Section 2.3).

2.1. Reservoir Computing

Reservoir computing revolves around the exploitation of dynamical systems, called reservoirs, for computation. In terms of machine learning, we can divide the whole approach into two phases: a training phase (see Section 2.1.1), in which teacher forcing is employed, and an exploitation phase (see Section 2.1.2), in which we use the dynamical system for our computations (e.g., predictions). In the following, we provide a brief introduction to how training and exploitation work in reservoir computing, using a nonlinear map of the form:
$$ x_{t+1} = \varphi\left( A x_t + B u_t \right), \tag{1} $$
with column vectors $x_t \in \mathbb{R}^d$ and $u_t \in \mathbb{R}^f$; matrices $A \in \mathbb{R}^{d \times d}$ and $B \in \mathbb{R}^{d \times f}$; a nonlinear function $\varphi : \mathbb{R}^d \to \mathbb{R}^d$ (e.g., the hyperbolic tangent); and time-steps $t = 1, \dots, T$. In addition, we enforce the spectral radius of A to satisfy $\rho(A) < 1$; this allows the system to exhibit a form of short-term memory. The rationale behind using a map as an example is that neural network or physics simulation-based approaches to reservoir computing can be reduced to discrete dynamical systems of this form.
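For concreteness, the map of Equation (1) can be written in a few lines of Julia (the language also used for our experiments, see Section 2.4). The following is only an illustrative sketch: the dimensions d and f, the random weights, and the target spectral radius of 0.9 are our own assumptions, not values from the experiments.

```julia
using LinearAlgebra, Random

# Minimal sketch of the reservoir map in Equation (1).
# d, f and the 0.9 target spectral radius are illustrative assumptions.
d, f = 50, 1                              # state and input dimensions
rng = MersenneTwister(42)
A = randn(rng, d, d)
A .*= 0.9 / maximum(abs.(eigvals(A)))     # enforce ρ(A) < 1
B = randn(rng, d, f)

# x_{t+1} = φ(A x_t + B u_t), with φ = tanh applied element-wise
update(x, u) = tanh.(A * x .+ B * u)
```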

2.1.1. Training

The training of reservoir computing systems is done in a supervised fashion; as such, we need, in addition to the input signal u, a target signal y as ground truth. We further define a washout time $0 \le \tau < T$, which is the time that the system needs until its state x is entirely dependent on the input u. Using Equation (1), we then drive the dynamical system with the input signal u and collect the states $x_t$ as row vectors for each time-step $t > \tau$ into a matrix. To conclude the training, the following equation is solved for the column vector $w \in \mathbb{R}^d$:
$$ y = \begin{bmatrix} x_{\tau+1}^{T} \\ x_{\tau+2}^{T} \\ \vdots \\ x_{T}^{T} \end{bmatrix} w, \tag{2} $$
which is usually done using linear regression techniques.
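A minimal Julia sketch of this procedure, continuing the toy map above (the washout τ and the unregularised least-squares solve are illustrative choices; regularised variants follow in Section 2.3 and Section 2.4.5):

```julia
# Drive the reservoir with a scalar input signal u and collect the
# post-washout states as rows of X, as in Equation (2).
function collect_states(u::Vector{Float64}, τ::Int; d::Int=50)
    T = length(u)
    x = zeros(d)
    X = zeros(T - τ, d)
    for t in 1:T
        x = update(x, [u[t]])
        if t > τ
            X[t - τ, :] = x
        end
    end
    return X
end

# Solve y ≈ X w for the readout weights w in the least-squares sense.
train(X, y) = X \ y
```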

2.1.2. Exploitation

Once we have calculated our output weights w, we can compute an output $\hat{y}_t$ with:
$$ \hat{y}_t = w^T x_t. \tag{3} $$
Depending on the application, two ways of driving a reservoir computing system are possible. Either in an open-loop of the form:
$$ x_{t+1} = \varphi\left( A x_t + B u_t \right), \qquad \hat{y}_{t+1} = w^T x_{t+1}, \tag{4} $$
or in a closed-loop:
$$ x_{t+1} = \varphi\left( A x_t + B \hat{y}_t \right), \qquad \hat{y}_{t+1} = w^T x_{t+1}. \tag{5} $$
As can be seen in Equations (4) and (5), the difference between a closed- and open-loop setup is how the input is generated: in the closed-loop setup, outputs are reused as inputs in the next time-step.
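Both modes can be sketched as follows, reusing the update function and the trained weights w from the sketches above:

```julia
using LinearAlgebra: dot

# Open-loop (Equation (4)): the reservoir is driven by the external input u.
function run_open(u, w; d=50)
    x, ŷ = zeros(d), Float64[]
    for uₜ in u
        x = update(x, [uₜ])
        push!(ŷ, dot(w, x))      # ŷ_{t+1} = wᵀ x_{t+1}
    end
    return ŷ
end

# Closed-loop (Equation (5)): the previous output is fed back as input.
function run_closed(y0, steps, w; d=50)
    x, y, ŷ = zeros(d), y0, Float64[]
    for _ in 1:steps
        x = update(x, [y])
        y = dot(w, x)
        push!(ŷ, y)
    end
    return ŷ
end
```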

2.2. Mass-Spring Networks

Mass-spring networks are, in principle, coupled nonlinear oscillators; they have been popularised by Hauser et al. [9] and have also been proposed by Coulombe et al. [17]. Such systems can be used to approximate a variety of materials in simulations, like fabric [18], compliant materials as used in soft robotics [19], or flesh-like setups [20].
We use a mass-spring system as shown in Figure 2, with nonlinear springs exhibiting a spring force of the following form:
$$ f(\Delta l) = \tanh\left( k\, \Delta l \right), \tag{6} $$
with $\Delta l$ being the spring displacement and k the spring constant. This emulates a compliant, elastic material with a force-displacement curve as shown in Figure 3.
Figure 2. Visualisation of a mass-spring system as used in our experiment. The dark blue squares on the corners represent fixed masses (to fixate the network in space); the green circle in the middle is the input mass (where force is applied as input), the light blue circles are masses, and the black lines represent the nonlinear springs, connecting masses with each other.
Figure 3. The force displacement curve of the springs used in the simulation at different spring constants k.
To exploit such a system for computation, an input signal u is translated to a force f and applied to predetermined input masses (green in Figure 2). The network then starts to oscillate accordingly, and we can record the mass accelerations as the state x (cf. Equation (2)). The recorded states can then be used for training and exploitation, as described in Section 2.1.1 and Section 2.1.2.
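For illustration, the spring law of Equation (6) and a single simulation step for one free mass can be sketched in Julia as follows. The explicit Euler integration, rest lengths, time-step, and unit mass are our own assumptions here, not necessarily the integration details used in the experiments:

```julia
using LinearAlgebra: norm

# Nonlinear spring force from Equation (6): f(Δl) = tanh(k·Δl)
spring_force(Δl, k) = tanh(k * Δl)

# One explicit Euler step for a single mass connected to fixed neighbour
# positions; rest lengths `rest`, time-step dt and unit mass are
# illustrative assumptions. `fext` carries the external input force.
function mass_step(pos, vel, neighbours, rest, k, dt; fext=zeros(2))
    F = copy(fext)
    for (n, r) in zip(neighbours, rest)
        Δ = n - pos                           # vector along the spring
        l = norm(Δ)
        F += spring_force(l - r, k) * Δ / l   # pull/push along the spring axis
    end
    acc = F                                   # unit mass: a = F/m = F
    return pos + dt * vel, vel + dt * acc, acc  # acc is recorded as state x
end
```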

2.3. Abstract Reservoir Computing

When physically building mass-spring systems, as introduced in Section 2.2, we have to deal with tolerances due to imperfections in the creation process. The initial positions of the masses therefore deviate from the positions assumed in the simulation; this is visualized in Figure 4. As a first step to deal with this problem, we can replace each component $p_i$ of the location vector p of a mass with an interval, or ball, of the form $(p_{i,\mathrm{centre}}, p_{i,\mathrm{radius}})$, representing the possible positions of the mass (red rectangle in Figure 4). We call this a hyperrectangle or, in abstract interpretation terminology, a box. This abstraction is then used directly in the simulation using ball arithmetic [21,22]. Instead of concrete numbers for our states $x_t$, we then obtain state tuples of the form $(x_{t,\mathrm{centre}}, x_{t,\mathrm{radius}})$, which we collect in the matrices $X_c$ and $X_r$, respectively. Senn and Kumazawa [16] proposed to use the additional information as constraints for the linear regression and to use a splitting conic solver [23] to solve:
$$ \underset{w}{\operatorname{argmin}}\ \left\lVert X_c w - y_c \right\rVert \quad \text{s.t.} \quad X_r \lvert w \rvert \le y_r. \tag{7} $$
This approach has the advantage of having exact upper error bounds encoded in $y_r$, which represent the maximum desired deviation from the concrete solution given by $y_c$; using a solver, however, slows down the training significantly. By relaxing the requirement for an upper error bound, we can reformulate Equation (7) as follows:
$$ \underset{w}{\operatorname{argmin}}\ \left\lVert X_c w - y_c \right\rVert_2 + \left\lVert X_r w \right\rVert_2. \tag{8} $$
Reformulating this as the cost function L allows us to derive a closed-form solution, as shown in Equations (9) and (10):
$$ L = \left\lVert X_c w - y_c \right\rVert^2 + \left\lVert X_r w \right\rVert^2, \qquad \frac{dL}{dw} = 2 X_c^T X_c w - 2 X_c^T y_c + 2 X_r^T X_r w. \tag{9} $$
Then, setting the derivative dL/dw equal to 0, we can solve for w:
$$ 0 = 2 X_c^T X_c w - 2 X_c^T y_c + 2 X_r^T X_r w \quad \Longrightarrow \quad w = \left( X_c^T X_c + X_r^T X_r \right)^{-1} X_c^T y_c. \tag{10} $$
Using Equation (10) instead of solving Equation (7), we trade the assurance of error bounds for a significant speed-up.
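In code, Equation (10) amounts to a single linear solve; the Julia sketch below is illustrative and the function name is our own. Note that any $X_r$ with $X_r^T X_r = \lambda I$ would recover ridge regression, so the ball radii act as a data-dependent regulariser that penalises weights on sensors with uncertain states:

```julia
# Closed-form abstract readout from Equation (10): the radius matrix Xr
# penalises weights on dimensions whose states are uncertain.
abstract_readout(Xc, Xr, yc) = (Xc' * Xc + Xr' * Xr) \ (Xc' * yc)
```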

2.4. Experimental Setup

To evaluate our proposed approach, we implemented a numerical simulation of a mass-spring network using Julia 1.5 [24] and tested it with three datasets, in open- and closed-loop setups, for different types of error sources. The benchmark datasets are the same ones used by Goudarzi et al. [25]. They were precomputed for 5000 time-steps; each point in each time-series was then repeated 5 times (giving time-series with 15,000 data points); finally, we split them into training and testing sets. The training sets consisted of the first 10,000 time-steps, whereas the remaining 5000 time-steps were used for the testing sets.

2.4.1. Mass-Spring Network

We use a mass-spring network as depicted in Figure 2. All masses are first aligned to a regular grid and then slightly displaced by a value $\Delta p \sim U^2(-0.25, 0.25)$. Each mass is then connected with its 8-neighbourhood through nonlinear springs based on Equation (6). The four corner masses (blue) are fixed, and the input signal is applied as a force to a single input mass (green). As sensor readings, we use the acceleration of each mass in the network.
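A sketch of this construction (the 5×5 grid size below is an illustrative assumption):

```julia
using Random

# Build jittered grid positions and 8-neighbourhood spring pairs.
function build_grid(nx=5, ny=5; rng=MersenneTwister(0))
    jitter() = 0.5 * rand(rng) - 0.25            # Δp ~ U(-0.25, 0.25)
    pos = [[i + jitter(), j + jitter()] for i in 1:nx, j in 1:ny]
    idx(i, j) = (j - 1) * nx + i                 # column-major, matching vec()
    springs = Tuple{Int,Int}[]
    for i in 1:nx, j in 1:ny, di in -1:1, dj in -1:1
        (di == 0 && dj == 0) && continue
        (1 <= i + di <= nx && 1 <= j + dj <= ny) || continue
        a, b = idx(i, j), idx(i + di, j + dj)
        a < b && push!(springs, (a, b))          # store each spring once
    end
    return vec(pos), springs
end
```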

2.4.2. Hénon Time-Series

The Hénon time-series is based on the Hénon map introduced in 1976 [26]. Equation (11) was used to compute the time-series.
$$ y_t = 1 - 1.4\, y_{t-1}^2 + 0.3\, y_{t-2} + \mathcal{N}(0, 0.001). \tag{11} $$
The Hénon time-series, as used in the experiments, is shown in Figure 5.
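A sketch of the generator; treating the 0.001 in the Gaussian term of Equation (11) as the variance (so that the standard deviation is its square root) is our reading, and the seed is arbitrary:

```julia
using Random

# Hénon time-series from Equation (11) with additive Gaussian noise;
# interpreting 0.001 as the variance is an assumption.
function henon(T; rng=MersenneTwister(0))
    y = zeros(T)
    for t in 3:T
        y[t] = 1 - 1.4 * y[t-1]^2 + 0.3 * y[t-2] + sqrt(0.001) * randn(rng)
    end
    return y
end
```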

2.4.3. NARMA10 Time-Series

Non-linear autoregressive moving average (NARMA) tasks are widely used in the reservoir computing community as basic benchmarks. NARMA10 specifically is one of the most used benchmarks for reservoir computing and is defined as:
$$ y_t = 0.3\, y_{t-1} + 0.05\, y_{t-1} \sum_{i=1}^{10} y_{t-i} + 1.5\, u_{t-10}\, u_{t-1} + 0.1, \tag{12} $$
with $u_t \sim U(0, 0.5)$ being drawn from a uniform distribution.
The NARMA10 time-series, as used in the experiments, is shown in Figure 6.

2.4.4. NARMA20 Time-Series

The NARMA20 task is the same as the NARMA10 one, except with longer time dependencies and an additional non-linearity:
$$ y_t = \tanh\left( 0.3\, y_{t-1} + 0.05\, y_{t-1} \sum_{i=1}^{20} y_{t-i} + 1.5\, u_{t-20}\, u_{t-1} + 0.1 \right). \tag{13} $$
The NARMA20 time-series, as used in the experiments, is shown in Figure 7.
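Since Equation (13) differs from Equation (12) only in the order and the outer tanh, both series can be generated with a single sketch (the series length and seed are illustrative, and the point-repetition preprocessing described in Section 2.4 is omitted):

```julia
using Random

# Generic NARMA generator: n = 10 with φ = identity gives Equation (12),
# n = 20 with φ = tanh gives Equation (13).
function narma(T, n; φ=identity, rng=MersenneTwister(0))
    u = 0.5 .* rand(rng, T)                   # u_t ~ U(0, 0.5)
    y = zeros(T)
    for t in (n + 1):T
        s = sum(@view y[t-n:t-1])             # Σ_{i=1..n} y_{t-i}
        y[t] = φ(0.3 * y[t-1] + 0.05 * y[t-1] * s + 1.5 * u[t-n] * u[t-1] + 0.1)
    end
    return u, y
end

u10, y10 = narma(5000, 10)
u20, y20 = narma(5000, 20; φ=tanh)
```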

2.4.5. Baselines

We compare the results of our proposed approach to the following two baselines:
  • Training with ridge regression (classical model)
  • Training with linear regression and added noise (noise model)
Using the sensor readings $x_{\tau+1}, \dots, x_T$ from the training simulation, the classical model is trained using Equation (14), and the noise model's training is based on Equation (15), with $\epsilon$ set to the amplitude of the current error parameter.
$$ X = \begin{bmatrix} x_{\tau+1}^{T} \\ x_{\tau+2}^{T} \\ \vdots \\ x_{T}^{T} \end{bmatrix}, \qquad w = \left( X^T X + 0.001\, I \right)^{-1} X^T y, \tag{14} $$

$$ w = \left( X + U(-\epsilon, \epsilon) \right)^{+} y, \tag{15} $$

where $U(-\epsilon, \epsilon)$ denotes a matrix of uniform noise of the same size as X, and $(\cdot)^{+}$ the pseudoinverse, i.e., a least-squares solve on the perturbed states.
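Both baselines then reduce to a few lines in Julia, assuming the states have already been collected in X as in Equation (2); the function names and seed are our own:

```julia
using LinearAlgebra, Random

# Classical baseline, Equation (14): ridge regression with λ = 0.001.
ridge_readout(X, y; λ=0.001) = (X' * X + λ * I) \ (X' * y)

# Noise baseline, Equation (15): least squares on states perturbed by
# uniform noise U(-ε, ε); ε follows the current error parameter.
function noise_readout(X, y, ε; rng=MersenneTwister(0))
    noise = ε .* (2 .* rand(rng, size(X)...) .- 1)   # entries in U(-ε, ε)
    return (X .+ noise) \ y
end
```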

2.4.6. Sensor Augmentations

We tested each model under different types of errors that could potentially occur in a real-world scenario.
  • Sensor Failure: before testing, masses were randomly selected with a given probability p, and their readings were forced to 0 during testing.
  • Sensor Noise: Gaussian noise with zero mean and varying standard deviation σ was added during testing.
  • Fixed Sensor Displacement: sensor readings were displaced by a fixed value z.
  • Mass Position Displacement: the mass positions were randomly displaced by a random vector $\Delta x \in [-k, k]^2$.
Each possible source of error was tested independently of other error sources. The parameter ranges are shown in Table 1.
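The first three augmentations act directly on the matrix of recorded sensor readings (rows are time-steps, columns are sensors), whereas the mass position displacement perturbs the initial positions before the test simulation is run. A hedged sketch of all four, with names of our own choosing:

```julia
using Random

# Sensor failure: zero out each sensor column with probability p.
sensor_failure(X, p; rng=MersenneTwister(0)) =
    X .* (rand(rng, 1, size(X, 2)) .>= p)

# Sensor noise: add zero-mean Gaussian noise with standard deviation σ.
sensor_noise(X, σ; rng=MersenneTwister(0)) = X .+ σ .* randn(rng, size(X)...)

# Fixed sensor displacement: shift all readings by z.
sensor_shift(X, z) = X .+ z

# Mass position displacement: jitter initial positions by Δx ∈ [-k, k]².
displace(pos, k; rng=MersenneTwister(0)) =
    [p .+ (2k .* rand(rng, 2) .- k) for p in pos]
```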

3. Results and Discussion

For each experiment, we measured the mean squared error (MSE), given by Equation (16), between the outputs $\hat{y}$ generated by the system and the values y in the test sets. As can be seen in Figure 8, Figure 9, Figure 10 and Figure 11, our proposed approach scores better regarding the MSE than the classical approach in all experiments. The training regime with added noise only catches up with, or overtakes, our proposed training regime at sufficiently high noise levels. Exact numbers can be found in Appendix A in Table A1, Table A2, Table A3 and Table A4.
$$ MSE = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2. \tag{16} $$
When looking at the results when no noise is present in the input, our proposal surpasses both baselines in every case. This can indicate that the abstract regulariser is generally a better choice than $L_2$-regularisation. Considering the results of the experiments with noise present, our proposed approach gives more consistent results over the whole spectrum of noise amplitudes, except for sensor failures, i.e., missing sensor readings; in this case, both baselines also perform poorly.
One reason for the better performance, even without noise present, is that ball arithmetic [21], as used in the experiments, also captures numerical imprecision. This is comparable to adding noise to the system and thus helps against overfitting. Looking at the standard deviations of the results (Table A1, Table A2, Table A3 and Table A4), we see that the abstract training regime leads to less variation in the outputs, indicating that our approach is more robust against randomness.

4. Conclusions and Outlook

Although physical reservoir computing systems are becoming more and more important, and most experiments are based on numerical simulations, the direct transfer of trained systems from simulation to the real world has not been widely studied yet. We proposed a new training regime based on abstract interpretation to address some of the issues that are to be expected, such as building tolerances, sensor defects, and noise, and verified our approach in a series of simulated experiments. The results support the use of our abstract regulariser not only in this setting but also in general, as it achieved lower error rates; this is most likely due to the interval arithmetic used in our experiments. When adding noise, the difference to the classical approach with $L_2$-regularisation becomes even more significant. In contrast, the training regime with added noise improves the more noise is added; therefore, in settings where high noise amplitudes are to be expected, that approach seems to be the better choice.
For future work, we envision a direct comparison between a physical reservoir computing system in simulation and in real life, e.g., one based on fabrics, as this comes close to the mass-spring systems used in our experiments. Another direction concerns the nature of the noise captured by the abstraction. Currently, we only consider uniformly distributed abstractions, but most problems in nature follow other distributions, e.g., a normal distribution. Being able to use different distributions for the abstractions would make it possible to tailor the training regime to the problem at hand.
In either case, both directions would allow for a more widespread use of physical reservoir computing, as they would enable rapid iteration in simulation and a direct transfer of results to the real world.

Author Contributions

Conceptualization, C.W.S.; methodology, C.W.S.; software, C.W.S.; validation, C.W.S.; formal analysis, C.W.S.; investigation, C.W.S.; resources, C.W.S.; data curation, C.W.S.; writing—original draft preparation, C.W.S.; writing—review and editing, C.W.S. and I.K.; visualisation, C.W.S.; supervision, I.K.; funding acquisition, C.W.S. All authors have read and agreed to the published version of the manuscript.

Funding

Open access funding provided by ZHAW Zurich University of Applied Sciences.

Acknowledgments

The authors thank Carmen Mei Ling Frischknecht-Gruber of the Zurich University of Applied Sciences for her comments and proofreading while creating this manuscript.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Results

Table A1, Table A2, Table A3 and Table A4 show additional statistics averaged over 10 runs for each experiment conducted.
Table A1. Average MSE ± standard deviation for the experiments simulating failing sensors over 10 experiment runs. The parameter p gives the fraction of sensors that were forced to 0.

Dataset | Model | p = 0.00 | p = 0.01 | p = 0.02 | p = 0.03
Hénon | abstract | 5.11×10⁻¹ ± 3.10×10⁻⁴ | 5.40×10⁻¹ ± 8.56×10⁻³ | 5.96×10⁻¹ ± 1.24×10⁻¹ | 1.20 ± 1.35
Hénon | classical | 5.88×10⁻¹ ± 3.17×10⁻² | 1.35×10¹ ± 2.69×10¹ | 8.57 ± 1.54×10¹ | 3.43×10¹ ± 4.09×10¹
Hénon | noise | 5.83×10⁻¹ ± 2.38×10⁻² | 2.09 ± 2.74 | 1.69 ± 3.13 | 2.51 ± 3.56
NARMA10 | abstract | 1.38×10⁻² ± 3.88×10⁻⁵ | 1.51×10⁻² ± 8.40×10⁻⁴ | 1.55×10⁻² ± 4.69×10⁻⁴ | 1.59×10⁻² ± 1.66×10⁻³
NARMA10 | classical | 9.99×10⁻² ± 4.08×10⁻² | 6.08×10¹ ± 1.14×10² | 2.13×10² ± 2.20×10² | 1.60×10² ± 1.61×10²
NARMA10 | noise | 2.35×10⁻¹ ± 1.19×10⁻¹ | 5.60×10⁻² ± 7.01×10⁻² | 5.46 ± 1.12×10¹ | 1.69×10⁻² ± 3.29×10⁻³
NARMA20 | abstract | 1.88×10⁻³ ± 3.67×10⁻⁵ | 2.88×10⁻³ ± 3.20×10⁻⁴ | 2.68×10⁻³ ± 1.95×10⁻⁴ | 2.67×10⁻³ ± 3.33×10⁻⁴
NARMA20 | classical | 1.21×10⁻¹ ± 6.16×10⁻² | 2.49×10¹ ± 3.76×10¹ | 4.11×10¹ ± 4.70×10¹ | 4.95×10¹ ± 9.20×10¹
NARMA20 | noise | 2.63×10⁻¹ ± 1.43×10⁻¹ | 1.09×10⁻² ± 4.58×10⁻³ | 4.49×10⁻³ ± 7.36×10⁻⁴ | 1.72×10⁻¹ ± 5.05×10⁻¹

Dataset | Model | p = 0.04 | p = 0.05 | p = 0.06 | p = 0.07
Hénon | abstract | 1.38 ± 1.77 | 1.69 ± 2.36 | 4.81 ± 7.96 | 9.43 ± 1.33×10¹
Hénon | classical | 2.36×10¹ ± 2.17×10¹ | 4.75×10¹ ± 5.20×10¹ | 7.42×10¹ ± 5.73×10¹ | 5.20×10¹ ± 6.11×10¹
Hénon | noise | 2.01 ± 4.35 | 1.95 ± 3.86 | 2.99×10¹ ± 8.65×10¹ | 1.61×10¹ ± 4.46×10¹
NARMA10 | abstract | 1.61×10⁻² ± 1.20×10⁻³ | 1.66×10⁻² ± 1.82×10⁻³ | 4.46×10⁻² ± 7.64×10⁻² | 2.05×10⁻² ± 4.45×10⁻³
NARMA10 | classical | 2.31×10² ± 1.64×10² | 2.59×10² ± 1.51×10² | 3.07×10² ± 2.39×10² | 3.08×10² ± 2.19×10²
NARMA10 | noise | 9.72×10⁻¹ ± 2.87 | 1.49×10⁻² ± 8.54×10⁻⁴ | 1.34 ± 3.99 | 2.78 ± 4.31
NARMA20 | abstract | 2.98×10⁻³ ± 3.64×10⁻⁴ | 2.73×10⁻³ ± 3.37×10⁻⁴ | 2.91×10⁻³ ± 4.38×10⁻⁴ | 3.26×10⁻³ ± 5.69×10⁻⁴
NARMA20 | classical | 8.58×10¹ ± 7.37×10¹ | 4.94×10¹ ± 4.46×10¹ | 9.63×10¹ ± 5.72×10¹ | 1.38×10² ± 1.25×10²
NARMA20 | noise | 2.95×10⁻³ ± 4.11×10⁻⁴ | 5.46×10⁻² ± 1.55×10⁻¹ | 2.31×10⁻³ ± 1.37×10⁻⁴ | 2.21×10⁻³ ± 1.08×10⁻⁴

Dataset | Model | p = 0.08 | p = 0.09 | p = 0.10
Hénon | abstract | 1.10×10¹ ± 1.91×10¹ | 2.81×10¹ ± 3.49×10¹ | 1.75×10¹ ± 3.93×10¹
Hénon | classical | 6.89×10¹ ± 1.30×10² | 1.77×10² ± 2.03×10² | 7.70×10¹ ± 1.06×10²
Hénon | noise | 2.32×10¹ ± 6.62×10¹ | 5.97×10¹ ± 7.35×10¹ | 1.81×10¹ ± 5.20×10¹
NARMA10 | abstract | 4.53×10⁻¹ ± 1.26 | 7.96×10⁻² ± 1.77×10⁻¹ | 6.39×10⁻² ± 5.86×10⁻²
NARMA10 | classical | 3.82×10² ± 1.75×10² | 2.41×10² ± 1.04×10² | 2.07×10² ± 1.66×10²
NARMA10 | noise | 1.42×10⁻² ± 3.38×10⁻⁴ | 1.39×10⁻² ± 5.19×10⁻⁴ | 9.00×10⁻¹ ± 2.66
NARMA20 | abstract | 3.04×10⁻³ ± 4.95×10⁻⁴ | 3.00×10⁻³ ± 3.61×10⁻⁴ | 3.46×10⁻³ ± 4.50×10⁻⁴
NARMA20 | classical | 7.15×10¹ ± 6.04×10¹ | 1.17×10² ± 9.58×10¹ | 1.16×10² ± 9.45×10¹
NARMA20 | noise | 2.25×10⁻³ ± 1.34×10⁻⁴ | 2.15×10⁻³ ± 2.44×10⁻⁴ | 2.05×10⁻³ ± 1.46×10⁻⁴
Table A2. Average MSE ± standard deviation for the experiments simulating sensor noise over 10 experiment runs. The parameter σ gives the amplitude of the noise added to the sensor readings.

Dataset | Model | σ = 0.00 | σ = 0.01 | σ = 0.02 | σ = 0.03
Hénon | abstract | 5.11×10⁻¹ ± 3.10×10⁻⁴ | 5.36×10⁻¹ ± 6.54×10⁻³ | 5.43×10⁻¹ ± 1.42×10⁻² | 5.42×10⁻¹ ± 1.42×10⁻²
Hénon | classical | 5.88×10⁻¹ ± 3.17×10⁻² | 6.10×10⁻¹ ± 5.59×10⁻² | 5.82×10⁻¹ ± 3.28×10⁻² | 6.07×10⁻¹ ± 3.15×10⁻²
Hénon | noise | 5.83×10⁻¹ ± 2.38×10⁻² | 1.28 ± 1.35 | 5.92×10⁻¹ ± 5.49×10⁻² | 5.50×10⁻¹ ± 3.63×10⁻²
NARMA10 | abstract | 1.38×10⁻² ± 3.88×10⁻⁵ | 1.50×10⁻² ± 5.35×10⁻⁴ | 1.52×10⁻² ± 4.91×10⁻⁴ | 1.56×10⁻² ± 6.17×10⁻⁴
NARMA10 | classical | 9.99×10⁻² ± 4.08×10⁻² | 1.21×10⁻¹ ± 2.87×10⁻² | 1.49×10⁻¹ ± 8.47×10⁻² | 1.74×10⁻¹ ± 9.33×10⁻²
NARMA10 | noise | 2.35×10⁻¹ ± 1.19×10⁻¹ | 2.01×10⁻² ± 1.09×10⁻³ | 1.64×10⁻² ± 7.40×10⁻⁴ | 1.58×10⁻² ± 5.47×10⁻⁴
NARMA20 | abstract | 1.88×10⁻³ ± 3.67×10⁻⁵ | 2.91×10⁻³ ± 3.86×10⁻⁴ | 2.91×10⁻³ ± 4.49×10⁻⁴ | 2.70×10⁻³ ± 2.75×10⁻⁴
NARMA20 | classical | 1.21×10⁻¹ ± 6.16×10⁻² | 1.13×10⁻¹ ± 6.94×10⁻² | 1.17×10⁻¹ ± 4.06×10⁻² | 1.19×10⁻¹ ± 4.61×10⁻²
NARMA20 | noise | 2.63×10⁻¹ ± 1.43×10⁻¹ | 9.48×10⁻³ ± 2.21×10⁻³ | 4.97×10⁻³ ± 7.51×10⁻⁴ | 3.55×10⁻³ ± 6.72×10⁻⁴

Dataset | Model | σ = 0.04 | σ = 0.05 | σ = 0.06 | σ = 0.07
Hénon | abstract | 5.35×10⁻¹ ± 2.98×10⁻³ | 5.41×10⁻¹ ± 1.04×10⁻² | 5.35×10⁻¹ ± 6.49×10⁻³ | 5.39×10⁻¹ ± 6.83×10⁻³
Hénon | classical | 6.10×10⁻¹ ± 3.41×10⁻² | 6.10×10⁻¹ ± 3.51×10⁻² | 6.55×10⁻¹ ± 6.32×10⁻² | 6.46×10⁻¹ ± 8.54×10⁻²
Hénon | noise | 5.38×10⁻¹ ± 1.55×10⁻² | 5.32×10⁻¹ ± 1.34×10⁻² | 5.31×10⁻¹ ± 9.41×10⁻³ | 5.25×10⁻¹ ± 1.00×10⁻²
NARMA10 | abstract | 1.56×10⁻² ± 7.24×10⁻⁴ | 1.56×10⁻² ± 7.65×10⁻⁴ | 1.58×10⁻² ± 5.73×10⁻⁴ | 1.62×10⁻² ± 9.39×10⁻⁴
NARMA10 | classical | 3.03×10⁻¹ ± 2.94×10⁻¹ | 4.16×10⁻¹ ± 3.04×10⁻¹ | 2.98×10⁻¹ ± 2.25×10⁻¹ | 6.00×10⁻¹ ± 3.36×10⁻¹
NARMA10 | noise | 1.52×10⁻² ± 6.63×10⁻⁴ | 1.48×10⁻² ± 6.42×10⁻⁴ | 1.47×10⁻² ± 2.84×10⁻⁴ | 1.46×10⁻² ± 4.15×10⁻⁴
NARMA20 | abstract | 2.87×10⁻³ ± 3.53×10⁻⁴ | 2.93×10⁻³ ± 3.31×10⁻⁴ | 3.12×10⁻³ ± 5.87×10⁻⁴ | 2.93×10⁻³ ± 3.55×10⁻⁴
NARMA20 | classical | 1.67×10⁻¹ ± 1.07×10⁻¹ | 1.55×10⁻¹ ± 7.66×10⁻² | 1.53×10⁻¹ ± 6.87×10⁻² | 2.18×10⁻¹ ± 1.51×10⁻¹
NARMA20 | noise | 3.00×10⁻³ ± 3.13×10⁻⁴ | 2.61×10⁻³ ± 2.11×10⁻⁴ | 2.30×10⁻³ ± 9.01×10⁻⁵ | 2.25×10⁻³ ± 1.21×10⁻⁴

Dataset | Model | σ = 0.08 | σ = 0.09 | σ = 0.10
Hénon | abstract | 5.34×10⁻¹ ± 8.08×10⁻³ | 5.40×10⁻¹ ± 8.01×10⁻³ | 5.38×10⁻¹ ± 7.52×10⁻³
Hénon | classical | 6.24×10⁻¹ ± 4.45×10⁻² | 6.67×10⁻¹ ± 1.12×10⁻¹ | 8.04×10⁻¹ ± 2.75×10⁻¹
Hénon | noise | 5.24×10⁻¹ ± 3.89×10⁻³ | 5.20×10⁻¹ ± 2.76×10⁻³ | 5.20×10⁻¹ ± 3.61×10⁻³
NARMA10 | abstract | 1.59×10⁻² ± 7.09×10⁻⁴ | 1.67×10⁻² ± 1.62×10⁻³ | 1.66×10⁻² ± 7.81×10⁻⁴
NARMA10 | classical | 8.15×10⁻¹ ± 4.14×10⁻¹ | 1.01 ± 6.59×10⁻¹ | 1.15 ± 8.90×10⁻¹
NARMA10 | noise | 1.45×10⁻² ± 4.38×10⁻⁴ | 1.45×10⁻² ± 2.60×10⁻⁴ | 1.41×10⁻² ± 2.49×10⁻⁴
NARMA20 | abstract | 3.05×10⁻³ ± 7.32×10⁻⁴ | 3.14×10⁻³ ± 3.92×10⁻⁴ | 3.45×10⁻³ ± 2.25×10⁻⁴
NARMA20 | classical | 2.15×10⁻¹ ± 8.25×10⁻² | 2.10×10⁻¹ ± 1.12×10⁻¹ | 3.49×10⁻¹ ± 1.67×10⁻¹
NARMA20 | noise | 2.17×10⁻³ ± 7.56×10⁻⁵ | 2.15×10⁻³ ± 1.45×10⁻⁴ | 2.04×10⁻³ ± 1.05×10⁻⁴
Table A3. Average MSE ± standard deviation for the experiments simulating a sensor reading shift over 10 experiment runs. The parameter z gives the shift that was added to the sensor readings.

Dataset | Model | z = 0.00 | z = 0.01 | z = 0.02 | z = 0.03
Hénon | abstract | 5.11×10⁻¹ ± 3.10×10⁻⁴ | 5.38×10⁻¹ ± 1.40×10⁻² | 5.41×10⁻¹ ± 9.89×10⁻³ | 5.40×10⁻¹ ± 1.22×10⁻²
Hénon | classical | 5.88×10⁻¹ ± 3.17×10⁻² | 6.14×10⁻¹ ± 8.34×10⁻² | 5.93×10⁻¹ ± 6.51×10⁻² | 2.87 ± 4.44
Hénon | noise | 5.83×10⁻¹ ± 2.38×10⁻² | 8.87×10⁻¹ ± 8.11×10⁻¹ | 5.71×10⁻¹ ± 5.64×10⁻² | 5.47×10⁻¹ ± 3.61×10⁻²
NARMA10 | abstract | 1.38×10⁻² ± 3.88×10⁻⁵ | 1.51×10⁻² ± 5.03×10⁻⁴ | 1.52×10⁻² ± 4.06×10⁻⁴ | 1.56×10⁻² ± 9.01×10⁻⁴
NARMA10 | classical | 9.99×10⁻² ± 4.08×10⁻² | 9.94×10⁻¹ ± 2.71 | 1.48×10⁻¹ ± 1.51×10⁻¹ | 2.89×10⁻¹ ± 2.88×10⁻¹
NARMA10 | noise | 2.35×10⁻¹ ± 1.19×10⁻¹ | 2.11×10⁻² ± 5.82×10⁻³ | 1.80×10⁻² ± 2.44×10⁻³ | 1.57×10⁻² ± 6.76×10⁻⁴
NARMA20 | abstract | 1.88×10⁻³ ± 3.67×10⁻⁵ | 3.13×10⁻³ ± 3.54×10⁻⁴ | 2.72×10⁻³ ± 2.57×10⁻⁴ | 3.10×10⁻³ ± 6.85×10⁻⁴
NARMA20 | classical | 1.21×10⁻¹ ± 6.16×10⁻² | 1.22×10⁻¹ ± 6.48×10⁻² | 1.19×10⁻¹ ± 5.79×10⁻² | 1.10×10⁻¹ ± 3.31×10⁻²
NARMA20 | noise | 2.63×10⁻¹ ± 1.43×10⁻¹ | 9.13×10⁻³ ± 1.64×10⁻³ | 4.50×10⁻³ ± 4.86×10⁻⁴ | 3.58×10⁻³ ± 5.00×10⁻⁴

Dataset | Model | z = 0.04 | z = 0.05 | z = 0.06 | z = 0.07
Hénon | abstract | 5.35×10⁻¹ ± 1.02×10⁻² | 5.37×10⁻¹ ± 8.50×10⁻³ | 5.39×10⁻¹ ± 1.24×10⁻² | 5.35×10⁻¹ ± 5.82×10⁻³
Hénon | classical | 2.46 ± 5.26 | 7.37×10⁻¹ ± 2.69×10⁻¹ | 3.91 ± 6.66 | 1.93 ± 3.51
Hénon | noise | 5.35×10⁻¹ ± 1.81×10⁻² | 5.33×10⁻¹ ± 1.14×10⁻² | 5.28×10⁻¹ ± 1.41×10⁻² | 5.31×10⁻¹ ± 1.88×10⁻²
NARMA10 | abstract | 1.58×10⁻² ± 7.38×10⁻⁴ | 1.60×10⁻² ± 7.04×10⁻⁴ | 1.56×10⁻² ± 6.62×10⁻⁴ | 1.58×10⁻² ± 6.64×10⁻⁴
NARMA10 | classical | 1.55 ± 3.42 | 4.15 ± 1.05×10¹ | 3.42 ± 9.81 | 4.44 ± 8.82
NARMA10 | noise | 1.55×10⁻² ± 7.33×10⁻⁴ | 1.48×10⁻² ± 4.89×10⁻⁴ | 1.50×10⁻² ± 5.83×10⁻⁴ | 1.45×10⁻² ± 3.55×10⁻⁴
NARMA20 | abstract | 2.84×10⁻³ ± 2.18×10⁻⁴ | 2.96×10⁻³ ± 2.84×10⁻⁴ | 3.00×10⁻³ ± 2.51×10⁻⁴ | 2.78×10⁻³ ± 4.59×10⁻⁴
NARMA20 | classical | 1.57×10⁻¹ ± 8.50×10⁻² | 1.06×10⁻¹ ± 5.45×10⁻² | 1.71×10⁻¹ ± 1.05×10⁻¹ | 1.72×10⁻¹ ± 1.20×10⁻¹
NARMA20 | noise | 2.84×10⁻³ ± 2.25×10⁻⁴ | 2.62×10⁻³ ± 1.80×10⁻⁴ | 2.46×10⁻³ ± 1.32×10⁻⁴ | 2.27×10⁻³ ± 1.47×10⁻⁴

Dataset | Model | z = 0.08 | z = 0.09 | z = 0.10
Hénon | abstract | 5.34×10⁻¹ ± 7.57×10⁻³ | 5.32×10⁻¹ ± 5.62×10⁻³ | 5.34×10⁻¹ ± 1.06×10⁻²
Hénon | classical | 2.44 ± 3.27 | 3.76 ± 3.88 | 2.78 ± 3.70
Hénon | noise | 5.30×10⁻¹ ± 1.14×10⁻² | 5.27×10⁻¹ ± 9.84×10⁻³ | 5.23×10⁻¹ ± 7.76×10⁻³
NARMA10 | abstract | 1.58×10⁻² ± 7.20×10⁻⁴ | 1.59×10⁻² ± 7.41×10⁻⁴ | 1.63×10⁻² ± 5.71×10⁻⁴
NARMA10 | classical | 9.65×10⁻¹ ± 1.37 | 3.49 ± 9.69 | 1.41×10¹ ± 2.72×10¹
NARMA10 | noise | 1.46×10⁻² ± 3.87×10⁻⁴ | 1.44×10⁻² ± 2.95×10⁻⁴ | 1.44×10⁻² ± 3.67×10⁻⁴
NARMA20 | abstract | 2.89×10⁻³ ± 2.88×10⁻⁴ | 3.33×10⁻³ ± 4.70×10⁻⁴ | 3.22×10⁻³ ± 3.43×10⁻⁴
NARMA20 | classical | 1.22×10⁻¹ ± 5.11×10⁻² | 1.66×10⁻¹ ± 8.10×10⁻² | 1.16×10⁻¹ ± 5.58×10⁻²
NARMA20 | noise | 2.19×10⁻³ ± 9.98×10⁻⁵ | 2.04×10⁻³ ± 5.94×10⁻⁵ | 2.04×10⁻³ ± 8.45×10⁻⁵
Table A4. Average MSE ± standard deviation for the experiments simulating tolerances in mass placements over 10 experiment runs. The parameter k gives the amplitude of the noise added to the initial mass positions.

Dataset | Model | k = 0.00 | k = 0.01 | k = 0.02 | k = 0.03
Hénon | abstract | 5.11×10⁻¹ ± 3.10×10⁻⁴ | 5.39×10⁻¹ ± 1.26×10⁻² | 5.41×10⁻¹ ± 9.56×10⁻³ | 5.31×10⁻¹ ± 6.81×10⁻³
Hénon | classical | 5.88×10⁻¹ ± 3.17×10⁻² | 5.82×10⁻¹ ± 3.08×10⁻² | 5.74×10⁻¹ ± 2.52×10⁻² | 6.07×10⁻¹ ± 4.02×10⁻²
Hénon | noise | 5.83×10⁻¹ ± 2.38×10⁻² | 1.01 ± 1.10 | 6.18×10⁻¹ ± 7.47×10⁻² | 5.37×10⁻¹ ± 1.81×10⁻²
NARMA10 | abstract | 1.38×10⁻² ± 3.88×10⁻⁵ | 1.48×10⁻² ± 4.17×10⁻⁴ | 1.49×10⁻² ± 2.87×10⁻⁴ | 1.56×10⁻² ± 6.36×10⁻⁴
NARMA10 | classical | 9.99×10⁻² ± 4.08×10⁻² | 1.06×10⁻¹ ± 4.09×10⁻² | 9.32×10⁻² ± 2.04×10⁻² | 9.97×10⁻² ± 3.41×10⁻²
NARMA10 | noise | 2.35×10⁻¹ ± 1.19×10⁻¹ | 2.15×10⁻² ± 2.83×10⁻³ | 1.66×10⁻² ± 1.03×10⁻³ | 1.55×10⁻² ± 6.84×10⁻⁴
NARMA20 | abstract | 1.88×10⁻³ ± 3.67×10⁻⁵ | 2.94×10⁻³ ± 3.89×10⁻⁴ | 2.55×10⁻³ ± 1.75×10⁻⁴ | 2.74×10⁻³ ± 5.86×10⁻⁴
NARMA20 | classical | 1.21×10⁻¹ ± 6.16×10⁻² | 1.38×10⁻¹ ± 6.91×10⁻² | 1.36×10⁻¹ ± 5.19×10⁻² | 1.23×10⁻¹ ± 5.56×10⁻²
NARMA20 | noise | 2.63×10⁻¹ ± 1.43×10⁻¹ | 8.98×10⁻³ ± 1.97×10⁻³ | 5.19×10⁻³ ± 5.42×10⁻⁴ | 3.52×10⁻³ ± 4.68×10⁻⁴

Dataset | Model | k = 0.04 | k = 0.05 | k = 0.06 | k = 0.07
Hénon | abstract | 5.34×10⁻¹ ± 5.45×10⁻³ | 5.34×10⁻¹ ± 1.00×10⁻² | 5.35×10⁻¹ ± 5.70×10⁻³ | 5.32×10⁻¹ ± 5.11×10⁻³
Hénon | classical | 5.86×10⁻¹ ± 2.65×10⁻² | 5.85×10⁻¹ ± 3.60×10⁻² | 5.98×10⁻¹ ± 3.42×10⁻² | 5.93×10⁻¹ ± 5.82×10⁻²
Hénon | noise | 5.38×10⁻¹ ± 2.55×10⁻² | 5.26×10⁻¹ ± 1.09×10⁻² | 5.24×10⁻¹ ± 5.41×10⁻³ | 5.22×10⁻¹ ± 8.28×10⁻³
NARMA10 | abstract | 1.59×10⁻² ± 8.17×10⁻⁴ | 1.58×10⁻² ± 7.70×10⁻⁴ | 1.60×10⁻² ± 7.59×10⁻⁴ | 1.55×10⁻² ± 2.96×10⁻⁴
NARMA10 | classical | 1.48×10⁻¹ ± 4.72×10⁻² | 8.14×10⁻² ± 2.71×10⁻² | 1.19×10⁻¹ ± 5.17×10⁻² | 9.44×10⁻² ± 2.53×10⁻²
NARMA10 | noise | 1.50×10⁻² ± 5.30×10⁻⁴ | 1.49×10⁻² ± 7.80×10⁻⁴ | 1.44×10⁻² ± 3.22×10⁻⁴ | 1.43×10⁻² ± 3.03×10⁻⁴
NARMA20 | abstract | 2.81×10⁻³ ± 3.76×10⁻⁴ | 2.87×10⁻³ ± 3.02×10⁻⁴ | 2.95×10⁻³ ± 4.10×10⁻⁴ | 2.82×10⁻³ ± 3.37×10⁻⁴
NARMA20 | classical | 1.16×10⁻¹ ± 4.46×10⁻² | 1.20×10⁻¹ ± 6.07×10⁻² | 1.31×10⁻¹ ± 6.77×10⁻² | 1.29×10⁻¹ ± 4.15×10⁻²
NARMA20 | noise | 2.75×10⁻³ ± 2.23×10⁻⁴ | 2.59×10⁻³ ± 1.90×10⁻⁴ | 2.38×10⁻³ ± 1.45×10⁻⁴ | 2.26×10⁻³ ± 1.82×10⁻⁴

Dataset | Model | k = 0.08 | k = 0.09 | k = 0.10
Hénon | abstract | 5.28×10⁻¹ ± 4.41×10⁻³ | 5.31×10⁻¹ ± 7.63×10⁻³ | 5.28×10⁻¹ ± 8.50×10⁻³
Hénon | classical | 5.87×10⁻¹ ± 2.03×10⁻² | 5.85×10⁻¹ ± 3.31×10⁻² | 5.57×10⁻¹ ± 1.14×10⁻²
Hénon | noise | 5.19×10⁻¹ ± 2.33×10⁻³ | 5.21×10⁻¹ ± 7.42×10⁻³ | 5.20×10⁻¹ ± 3.08×10⁻³
NARMA10 | abstract | 1.55×10⁻² ± 5.72×10⁻⁴ | 1.65×10⁻² ± 9.19×10⁻⁴ | 1.57×10⁻² ± 9.65×10⁻⁴
NARMA10 | classical | 1.19×10⁻¹ ± 3.58×10⁻² | 1.37×10⁻¹ ± 8.37×10⁻² | 1.34×10⁻¹ ± 7.29×10⁻²
NARMA10 | noise | 1.42×10⁻² ± 2.14×10⁻⁴ | 1.40×10⁻² ± 4.53×10⁻⁴ | 1.39×10⁻² ± 2.72×10⁻⁴
NARMA20 | abstract | 2.95×10⁻³ ± 5.17×10⁻⁴ | 2.98×10⁻³ ± 3.89×10⁻⁴ | 2.99×10⁻³ ± 3.90×10⁻⁴
NARMA20 | classical | 1.03×10⁻¹ ± 3.27×10⁻² | 8.87×10⁻² ± 3.96×10⁻² | 1.61×10⁻¹ ± 7.64×10⁻²
NARMA20 | noise | 2.09×10⁻³ ± 1.06×10⁻⁴ | 2.04×10⁻³ ± 1.03×10⁻⁴ | 2.04×10⁻³ ± 7.46×10⁻⁵

References

  1. Jaeger, H. The “Echo State” Approach to Analysing and Training Recurrent Neural Networks-with an Erratum Note; Technical Report; German National Research Center for Information Technology GMD: Bonn, Germany, 2001; Volume 148. [Google Scholar]
  2. Maass, W.; Markram, H. On the computational power of circuits of spiking neurons. J. Comput. Syst. Sci. 2004, 69, 593–616. [Google Scholar] [CrossRef] [Green Version]
  3. Fernando, C.; Sojakka, S. Pattern Recognition in a Bucket. In Proceedings of the European Conference on Artificial Life, Dortmund, Germany, 14–17 September 2003; Volume 2801, pp. 588–597. [Google Scholar] [CrossRef]
  4. Vandoorne, K.; Dierckx, W.; Schrauwen, B.; Verstraeten, D.; Baets, R.; Bienstman, P.; Campenhout, J. Toward optical signal processing using Photonic Reservoir Computing. Opt. Express 2008, 16, 11182–11192. [Google Scholar] [CrossRef] [PubMed]
  5. Vandoorne, K.; Dambre, J.; Verstraeten, D.; Schrauwen, B.; Bienstman, P. Parallel Reservoir Computing Using Optical Amplifiers. IEEE Trans. Neural Netw. 2011, 22, 1469–1481. [Google Scholar] [CrossRef] [PubMed]
  6. Nakajima, K.; Hauser, H.; Li, T.; Pfeifer, R. Information processing via physical soft body. Sci. Rep. 2015, 5, 1–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Hauser, H.; Füchslin, R.; Nakajima, K. Morphological Computation: The Body as a Computational Resource. In Opinions and Outlooks on Morphological Computation; Hauser, H., Füchslin, R., Pfeifer, R., Eds.; Self-Published: Zürich, Switzerland, 2014; pp. 226–244. [Google Scholar]
  8. Bhovad, P.; Li, S. Physical reservoir computing with origami and its application to robotic crawling. Sci. Rep. 2021, 11, 1–18. [Google Scholar] [CrossRef] [PubMed]
  9. Hauser, H.; Ijspeert, A.J.; Füchslin, R.M.; Pfeifer, R.; Maass, W. Towards a theoretical foundation for morphological computation with compliant bodies. Biol. Cybern. 2011, 105, 355–370. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Cousot, P.; Cousot, R. Abstract interpretation. In Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages—POPL ’77, Paris, France, 15–17 January 1977; ACM Press: New York, NY, USA, 1977. [Google Scholar] [CrossRef] [Green Version]
  11. Singh, G.; Gehr, T.; Püschel, M.; Vechev, M. An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 2019, 3, 1–30. [Google Scholar] [CrossRef] [Green Version]
  12. Singh, G.; Gehr, T.; Püschel, M.; Vechev, M. Boosting Robustness Certification of Neural Networks. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  13. Gehr, T.; Mirman, M.; Drachsler-Cohen, D.; Tsankov, P.; Chaudhuri, S.; Vechev, M. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 21–23 May 2018. [Google Scholar] [CrossRef]
  14. Mirman, M.; Gehr, T.; Vechev, M. Differentiable Abstract Interpretation for Provably Robust Neural Networks. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, Stockholm, Sweden, 10–15 July 2018; Dy, J., Krause, A., Eds.; PMLR: Stockholm, Sweden, 2018; Volume 80, pp. 3578–3586. [Google Scholar]
  15. Van Der Hoeven, J. Ball arithmetic. In Proceedings of the Conference Logical Approaches to Barriers in Computing and Complexity, Greifswald, Germany, 17–20 February 2010. [Google Scholar]
  16. Senn, C.W.; Kumazawa, I. Abstract Echo State Networks. In Artificial Neural Networks in Pattern Recognition; Schilling, F.P., Stadelmann, T., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 77–88. [Google Scholar]
  17. Coulombe, J.C.; York, M.C.; Sylvestre, J. Computing with networks of nonlinear mechanical oscillators. PLoS ONE 2017, 12, e0178663. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Provot, X. Deformation Constraints in a Mass-Spring Model to Describe Rigid Cloth Behaviour. In Proceedings of the Graphics Interface ’95, Québec, QC, Canada, 17–19 May 1995; Canadian Human-Computer Communications Society: Toronto, ON, Canada, 1995; pp. 147–154. [Google Scholar]
  19. Urbain, G.; Degrave, J.; Carette, B.; Dambre, J.; Wyffels, F. Morphological Properties of Mass–Spring Networks for Optimal Locomotion Learning. Front. Neurorobot. 2017, 11, 16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Murai, A.; Hong, Q.Y.; Yamane, K.; Hodgins, J.K. Dynamic skin deformation simulation using musculoskeletal model and soft tissue dynamics. Comput. Vis. Media 2017, 3, 49–60. [Google Scholar] [CrossRef] [Green Version]
  21. Johansson, F. Ball Arithmetic as a Tool in Computer Algebra. In Maple in Mathematics Education and Research; Gerhard, J., Kotsireas, I., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 334–336. [Google Scholar]
  22. Fieker, C.; Hart, W.; Hofmann, T.; Johansson, F. Nemo/Hecke: Computer Algebra and Number Theory Packages for the Julia Programming Language. In Proceedings of the 2017 ACM on International Symposium on Symbolic and Algebraic Computation (ISSAC ’17), Kaiserslautern, Germany, 25–28 July 2017; ACM: New York, NY, USA, 2017; pp. 157–164. [Google Scholar] [CrossRef] [Green Version]
  23. O’Donoghue, B.; Chu, E.; Parikh, N.; Boyd, S. Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding. J. Optim. Theory Appl. 2016, 169, 1042–1068. [Google Scholar] [CrossRef] [Green Version]
  24. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A fresh approach to numerical computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef] [Green Version]
  25. Goudarzi, A.; Banda, P.; Lakin, M.R.; Teuscher, C.; Stefanovic, D. A Comparative Study of Reservoir Computing for Temporal Signal Processing. arXiv 2014, arXiv:1401.2224. [Google Scholar]
  26. Hénon, M. A Two-Dimensional Mapping with a Strange Attractor; Springer: New York, NY, USA, 1976; Volume 50, pp. 69–77. [Google Scholar] [CrossRef]
Figure 1. Visualisation of the abstraction of a single data-point to a set of points, representing a hypercube. The solid point represents the concrete point, and the dotted ones represent points that are now also considered using the abstraction.
Figure 4. Visualisation of tolerances that can occur when physically building a mass-spring network. The light blue circle represents the exact location of the mass and the red rectangle represents the area of possible locations due to tolerances in horizontal and vertical directions. Such tolerances can change the dynamical behaviour of the network compared with the simulated counterpart, and pose a major obstacle when trying to transfer learnt parameters from simulation to real-world systems.
Figure 5. Hénon time-series as used in the experiments.
Figure 6. NARMA10 time-series as used in the experiments.
Figure 7. NARMA20 time-series as used in the experiments.
Figure 8. The average MSE with standard deviations for simulated sensor failures for the datasets (a) Hénon, (b) NARMA10 and (c) NARMA20.
Figure 9. The average MSE with standard deviations for simulated sensor noise for the datasets (a) Hénon, (b) NARMA10 and (c) NARMA20.
Figure 10. The average MSE with standard deviations for fixed simulated sensor reading displacements for the datasets (a) Hénon, (b) NARMA10 and (c) NARMA20.
Figure 11. The average MSE with standard deviations for simulated initial mass displacements for the datasets (a) Hénon, (b) NARMA10 and (c) NARMA20.
Table 1. The parameter ranges used for each type of simulated error.

Augmentation | Parameter | Range
Sensor Failure | p | [0, 0.1]
Sensor Noise | σ | [0, 0.1]
Fixed Sensor Displacement | z | [0, 0.1]
Mass Position Displacement | k | [0, 0.1]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
