Validating ENSO cyclostationary deterministic behavior

I usually write a more thorough analysis of research results, but this one is too interesting not to archive in real time.

First, recall that the behavior of ENSO is a cyclostationary yet metastable standing-wave process, forced primarily by angular momentum changes. That is essentially the physics of liquid sloshing. Setting the input forcings to the periods corresponding to the known angular momentum changes of the Chandler wobble and the long-period lunisolar cycles, it appears trivial to effectively capture the seemingly quasi-periodic nature of ENSO.

The key to this is identifying the strictly biennial yet metastable modulation that underlies the forcing. The biennial factor arises from period doubling of the seasonal cycle, and since the biennial alignment (even versus odd years) is arbitrary, the process is by nature metastable (not ergodic in the strictest sense). By identifying where a biennial phase reversal occurs, the truly cyclostationary arguments can be isolated.
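As a toy illustration (not the actual ENSO model), the period-doubled seasonal modulation and its arbitrary even/odd-year alignment can be sketched as follows; the parameter values here are placeholders chosen only to show the sign-flip symmetry:

```python
import numpy as np

# Toy sketch: a seasonal cycle whose period-doubled (biennial) modulation
# can lock onto either even or odd years.  The phase offset below stands
# in for the metastable choice of alignment.
t = np.arange(0.0, 20.0, 1.0 / 12.0)        # 20 years, monthly steps
annual = np.cos(2 * np.pi * t)              # 1-year seasonal cycle
biennial = np.cos(np.pi * t)                # 2-year (period-doubled) factor
forcing = annual * biennial                 # biennially modulated forcing

# Shifting the biennial alignment by one year merely flips the sign of
# the modulation, so neither alignment is preferred -- hence metastable.
flipped = annual * np.cos(np.pi * (t + 1.0))
assert np.allclose(forcing, -flipped)
```

The sign symmetry is why a phase reversal can occur without changing the underlying statistics, and why locating the reversal points matters for isolating the cyclostationary part.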

The results below demonstrate multiple-regression training on 30-year intervals, applying only the known Chandler and lunisolar forcing factors (no filtering was applied to the ENSO data, an average of the NINO3.4 and SOI indices). The 30-year training interval slides across the 1880-2013 time series in 10-year steps, while the out-of-band fit maintains a significant amount of coherence with the data:

Fig 1: A sliding 30-year training interval was applied at varying starting points in the ENSO index time series.

This is a remarkable result, considering that 30 years of data is barely enough to capture the chaotic 4-to-7-year periodicities that typically characterize ENSO. Yet even within that small interval, the multiple-regression fit of the handful of forcing factors is likely capturing the inherent cyclostationary aspect of the ENSO process.
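The sliding-window validation described above can be sketched minimally as follows. This is an assumed reconstruction, not the actual analysis code: the function names are mine, and the periods passed in would be the Chandler and lunisolar values, which are stand-ins here.

```python
import numpy as np

def design(t, periods):
    """One sine/cosine column pair per forcing period, plus a constant."""
    cols = [np.ones_like(t)]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    return np.column_stack(cols)

def sliding_fit(t, y, periods, train_years=30, step_years=10):
    """Train a multiple regression on a sliding window, extrapolate over
    the whole series, and report the out-of-band correlation per window."""
    X = design(t, periods)
    results = []
    for t0 in np.arange(t[0], t[-1] - train_years, step_years):
        mask = (t >= t0) & (t < t0 + train_years)
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        fit = X @ beta                                  # extrapolate everywhere
        r = np.corrcoef(fit[~mask], y[~mask])[0, 1]     # out-of-band coherence
        results.append((t0, r))
    return results
```

If the series really is driven by the supplied periods, the out-of-band correlation stays high for every window position; for a noise-driven series it collapses, which is exactly the contrast drawn with the red-noise test later in the post.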

Below is the fit over the entire interval, showing in yellow those regions that degrade the fit from a good correlation. Those highlighted parts likely set a ceiling on how well any model can fit the data.

Fig. 2: Complete fit over the entire interval. Regions in yellow (~1893, 1920, 1935, 1992) are likely candidates for further investigation, whether because of the higher noise levels in measurements taken before 1935, other data-collection artifacts, or overriding point-in-time climate events such as volcanic activity.

Figure 3 shows the sensitivity of the fit to the forcing periods. The cyclic values were predetermined, yet tweaking them slightly about the selected values degraded the fit, which indicates that they are likely physically relevant. The broadest peak, at 14.6 years, is related to an additional triaxial Earth-wobble term.

Fig 3: The main wobble and lunar periods were preselected, and slight changes to their values degraded the fit.

The extrapolated fit is quantitatively worse for earlier start years, as the decreasing correlation coefficient in Figure 4 shows. The dips are partly explained by the yellow-highlighted regions of Figure 2: the multiple regression was over-fitting those errors, leading to a poor correlation outside the training region.

Fig 4: Correlation coefficient validation

Figure 5 below shows the variation in the amplitudes of the sinusoidal factors. The minor 7-month (0.5748-year) cycle is predicted but not expected to be strong.

Fig 5: Variation in the fitting factors. Each of the fitting factors is consistently strong except for the 0.5748-year wobble. The earlier years appear more prone to over-fitting, as in Fig 4.

Multiple-regression validation against data outside the training interval is necessary for seemingly noisy and/or chaotic processes. Here is an example of a fit to a random walk generated by an Ornstein-Uhlenbeck red-noise process. The fit appears good within the training interval but fails completely outside it. The only thing stationary about a pure red-noise process is that statistical measures such as the variance remain constant over time. Since red noise is largely a memoryless process, any modeled waveform will eventually lose coherence with the data.

Fig 6: Fitting to red noise. An apparently good fit within the training interval hides the fact that the model loses coherence completely with the data outside that interval. Over the entire range, the correlation coefficient is only 0.06, which is nowhere near significant.
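A minimal version of this null test can be sketched as follows, assuming a standard Euler-Maruyama discretization of the Ornstein-Uhlenbeck process; the damping, noise amplitude, and sinusoid periods are all arbitrary illustrative choices, not the values behind Fig 6:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck (red noise) series via Euler-Maruyama stepping
dt, theta, sigma = 1.0 / 12.0, 0.5, 1.0     # monthly steps, arbitrary params
t = np.arange(0.0, 130.0, dt)
y = np.zeros_like(t)
for i in range(1, len(t)):
    y[i] = y[i - 1] - theta * y[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Fit a handful of sinusoids on a 30-year training window
periods = [14.6, 6.5, 4.2, 2.33]            # arbitrary illustrative periods
cols = [np.ones_like(t)]
for P in periods:
    cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
X = np.column_stack(cols)
train = (t >= 50.0) & (t < 80.0)
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
fit = X @ beta

# In-band correlation can look respectable; out-of-band it collapses,
# because the memoryless red noise carries no persistent cycles to track.
r_in = np.corrcoef(fit[train], y[train])[0, 1]
r_out = np.corrcoef(fit[~train], y[~train])[0, 1]
print(round(r_in, 2), round(r_out, 2))
```

Running this with different seeds makes the point even more starkly: the in-band fit always finds something to latch onto, while the out-of-band correlation scatters around zero.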

These charts show that ENSO is governed by a handful of known geophysical cyclic forcing factors, leading to the suggestion that it is cyclostationary deterministic and therefore amenable to forecasting.

Is it possible that we now have simple models for QBO, Chandler Wobble, and an even more concise ENSO?  A hat trick?


5 thoughts on “Validating ENSO cyclostationary deterministic behavior”

  1. Kevin, I have already done a forward looking forecast and the model nailed the current El Nino peak, placing it at the start of 2016. That was with training ENSO data to October 2013.

    Yet, in the greater scheme of things, it doesn't matter because it is just one point, and the model predicts the ENSO polarity correctly about 80% of the time. So you would need more agreements to draw anything significant from the current El Nino.

  2. Pingback: Biennial Mode of SST and ENSO | context/Earth

    • Thanks,

      As a hobby I like to mix mashups of instrumental music, and I just finished one featuring Peter Green-era Fleetwood Mac, so it is quite an honor to have you visit this blog.
