Lambert, Sébastien B., Steven L. Marcus, and Olivier de Viron. "Atmospheric Torques and Earth's Rotation: What Drove the Millisecond-Level Length-of-Day Response to the 2015-16 El Niño?"
Remember the concern over the QBO anomaly/disruption during 2016?
Quite a few papers were written on the topic:
- Newman, P. A., et al. "The anomalous change in the QBO in 2015–2016." Geophysical Research Letters 43.16 (2016): 8791-8797.
- Newman, P. A., et al. "The Anomalous Change in the QBO in 2015-16." AGU Fall Meeting Abstracts. 2016.
- Randel, W. J., and M. Park. "Anomalous QBO Behavior in 2016 Observed in Tropical Stratospheric Temperatures and Ozone." AGU Fall Meeting Abstracts. 2016.
- Dunkerton, Timothy J. "The quasi-biennial oscillation of 2015–2016: Hiccup or death spiral?" Geophysical Research Letters 43.19 (2016).
- Tweedy, O., et al. "Analysis of Trace Gases Response on the Anomalous Change in the QBO in 2015-2016." AGU Fall Meeting Abstracts. 2016.
- Osprey, Scott M., et al. "An unexpected disruption of the atmospheric quasi-biennial oscillation." Science 353.6306 (2016): 1424-1427.
Note that the training region for the model is highlighted in YELLOW and is in the interval from 1978 to 1990. This was well in the past, yet it was able to pinpoint the sharp peak 27 years later.
The disruption in 2015-2016, shown shaded in black, may have been a temporary forcing stimulus. You can see that it clearly flipped polarity with respect to the model. This will provoke a transient response in the DiffEq solution, which will eventually die off.
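The transient-response claim can be illustrated with a toy damped, driven oscillator (a generic sketch, not the QBO DiffEq itself; all numbers here are made up): a short opposing pulse flips the response, but once the pulse ends the transient decays and the solution re-locks to the steady forcing with no permanent phase shift.

```python
import math

def simulate(pulse=False, dt=0.01, steps=20000):
    """Damped oscillator driven by a steady sinusoid; optional opposing pulse."""
    w0, zeta, wf = 1.0, 0.05, 0.9   # natural freq, damping ratio, forcing freq
    x, v = 0.0, 0.0
    out = []
    for i in range(steps):
        t = i * dt
        f = math.sin(wf * t)
        if pulse and 80.0 < t < 90.0:
            f -= 3.0 * math.sin(wf * t)   # temporary disruption flips the forcing
        a = f - 2 * zeta * w0 * v - w0 * w0 * x
        v += a * dt                        # semi-implicit Euler step
        x += v * dt
        out.append(x)
    return out

baseline = simulate(pulse=False)
disturbed = simulate(pulse=True)
# during and just after the pulse (t ~ 80-100), the two runs diverge strongly
pulse_diff = max(abs(a - b) for a, b in zip(baseline[8000:10000], disturbed[8000:10000]))
# by the end of the run, the transient has died off and they agree again
late_diff = max(abs(a - b) for a, b in zip(baseline[-2000:], disturbed[-2000:]))
```

The key point is that the underlying forcing never changed; only the response was temporarily knocked out of alignment.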
The bottom line is that the climate scientists who pointed out the anomaly were correct that it was indeed a disruption, not necessarily because they understood why it occurred, but because it didn't fit a past pattern. It was good observational science, so the papers were appropriate for publication. However, if you compare the QBO model against the data, you will see many similar temporary disruptions in the historical record. So it was definitely not the cataclysmic event some had suggested. I think most scientists took a less alarmed view and simply pointed out that the reversal in stratospheric winds was unusual.
I like to use this next figure as an example of how this may occur (found in the comment from last year). A local hurricane will temporarily impact the tidal displacement via a sea swell; you can see that in the middle of the trace below. On both sides of this spike, the tidal model is still in phase, so the stimulus is indeed transient while the underlying forcing remains invariant. For QBO, instead of a hurricane, the disruption could be caused by a sudden stratospheric warming (SSW) event. It could also be an unaccounted-for lunar forcing pulse not captured in the model. That's probably worth more research.
As the QBO is still on a 28-month alignment, the external stimulus (as with ENSO, likely the lunar tidal force) is providing the boundary-condition synchronization.
The model for ENSO includes a nonlinear search feature that finds the best-fit tidal forcing parameters. This is similar to what a conventional ocean tidal analysis program performs: finding the best-fitting lunar tidal parameters over a measured historic interval spanning hundreds of cycles. Since tidal cycles are abundant, occurring at least once per day, it doesn't take much data collected over time to do an analysis. In contrast, the ENSO model cycles over the course of years, so we have to use as much data as we can while still setting aside test intervals.
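To make the idea concrete, here is a hypothetical sketch of that kind of search (synthetic data and made-up numbers, not the actual ENSO code): scan candidate forcing periods, and for each one solve the linear least-squares problem for amplitude and phase via sine/cosine components, keeping the best fit.

```python
import math

def fit_period(t, y, candidate_periods):
    """Grid-search candidate periods; closed-form LSQ for a*sin + b*cos at each."""
    best = None
    for P in candidate_periods:
        w = 2 * math.pi / P
        ss = sum(math.sin(w * ti) ** 2 for ti in t)
        cc = sum(math.cos(w * ti) ** 2 for ti in t)
        sc = sum(math.sin(w * ti) * math.cos(w * ti) for ti in t)
        ys = sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
        yc = sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
        det = ss * cc - sc * sc
        a = (ys * cc - yc * sc) / det   # normal-equation solution
        b = (yc * ss - ys * sc) / det
        resid = sum((yi - a * math.sin(w * ti) - b * math.cos(w * ti)) ** 2
                    for ti, yi in zip(t, y))
        if best is None or resid < best[0]:
            best = (resid, P, a, b)
    return best

# synthetic check: recover a known 27.55-day (anomalistic-like) cycle
t = [i * 0.5 for i in range(400)]
y = [1.3 * math.sin(2 * math.pi * ti / 27.55 + 0.4) for ti in t]
resid, P, a, b = fit_period(t, y, [27.0 + 0.05 * k for k in range(24)])
```

A real tidal analysis program does the same thing with many constituents at once, but the principle of scanning nonlinear parameters while solving the linear ones exactly is the same.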
What follows is the recipe (more involved than the short recipe) that will guarantee a deterministic best fit from a clean slate each time. Very little initial-condition information is needed to start with, so the final result can be confidently recovered each time, independent of the training interval.
In contrast to the well-known Butterfly Effect, there is another scientific modeling limitation known as the Hawkmoth Effect. Instead of simulation results being sensitive to initial conditions, which is the Butterfly Effect, the Hawkmoth Effect describes sensitivity to model structure. It's a more subtle argument for explaining why climate behavioral modeling is difficult to get right, and it is named after the hawkmoth because hawkmoths are "better camouflaged and less photogenic than butterflies".
Not everyone agrees that this is a real effect, as opposed to it simply revealing shortcomings in one's ability to correctly model the behavior under study. If you have the wrong model, or the wrong parameters for the model, of course it may diverge from the data rather sharply.
In the context of the ENSO model, we have already provided parameters for two orthogonal intervals of the data. Since there is some noise in the ENSO data, perfectly illustrated by the fact that SOI and NINO34 have a correlation coefficient of only 0.79, it is difficult to determine how much of the parameter differences are due to over-fitting of that noise.
In the figure below, the middle panel shows the difference between the SOI and NINO34 data, with yellow showing where the main discrepancies or uncertainties in the true ENSO value lie. Above and below are the model fits for the earlier (1880-1950, shaded with a yellow background) and later (1950-2016) training intervals. In certain cases, a poorer model fit may be ascribable to uncertainty in the ENSO measurement itself, such as near ~1909, ~1932, and ~1948, where the dotted red lines align with trained and/or tested model regions. The question mark at 1985 is a curiosity, as the SOI remains neutral while the model fits the more La Nina-like conditions of NINO34.
There is certainly nothing related to the Butterfly Effect in any of this, since the ENSO model is not driven by initial conditions, but by the guiding influence of the lunisolar cycles. So we are left to determine how much of the slight divergence we see is due to non-stationary variation of the model parameters over time, and how much is due to missing some other vital structural model parameter. In other words, the Hawkmoth Effect is our only concern.
In the model shown below, we deliberately employ significant over-fitting of the model parameters. The ENSO model has only two fundamental forcing periods, the Draconic (D) and Anomalistic (A) lunar periods, but as in conventional ocean tidal analysis, many more of the nonlinear harmonics need to be considered to make accurate predictions [see Footnote 1]. So we start with A and D, and then create all cross-combinations up to order 5, resulting in the set [A, D, AD, A^2, D^2, A^2D, AD^2, A^3, D^3, A^2D^2, A^3D, AD^3, A^4, D^4, A^2D^3, A^3D^2, A^4D, AD^4, A^5, D^5].
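Generating that harmonic set is mechanical. A minimal sketch (exponent pairs (i, j) standing for the term A^i * D^j; this representation is my own, not anything taken from the model code):

```python
from itertools import product

def harmonics(max_order=5):
    """All cross-terms A^i * D^j with total order between 1 and max_order."""
    terms = []
    for i, j in product(range(max_order + 1), repeat=2):
        if 1 <= i + j <= max_order:
            terms.append((i, j))
    # sort by total order, then lexicographically, for readability
    return sorted(terms, key=lambda term: (term[0] + term[1], term))

terms = harmonics(5)
# order 5 yields exactly the 20 combinations listed above,
# from (1, 0) = A and (0, 1) = D up to (5, 0) = A^5 and (0, 5) = D^5
```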
This looks like it has the potential for all the negative consequences of massive over-fitting, such as rapid divergence in amplitude outside the training interval, yet the results don't show this at all. Harmonics in general will not cause divergence, because they remain in phase with the fundamental frequencies both inside and outside the training interval. Beyond that, the higher-order harmonics have a diminishing impact, so this set is apparently about right to produce an excellent correlation outside the training interval. The two other important constraints in the fit are (1) the characteristic frequency modulation of the anomalistic period due to the synodic period (shown in the middle-left inset), and (2) the calibrated lunar forcing based on LOD measurements (shown in the lower panel).
The resulting correlation of model to data is 0.75 inside the training interval (1880-1980) and 0.69 in the test interval (1980-2016). This gets close to the best agreement we can expect, given that the correlation between SOI and NINO34 themselves only reaches 0.79. Read this post for the structural model parameter variations for a reduced harmonic set, to order 3 only.
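For reference, the windowed correlation check can be sketched as follows (entirely synthetic series standing in for the model and the data; the split index, periods, and noise level are illustrative only):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# monthly grid spanning 1880-2016 (136 years), in years
t = [i / 12 for i in range(1632)]
model = [math.sin(2 * math.pi * ti / 3.8) for ti in t]          # stand-in "model"
data = [m + 0.4 * math.sin(17.0 * ti) for ti, m in zip(t, model)]  # "data" = model + noise
split = 1200                                                     # ~1980 boundary
r_train = pearson(model[:split], data[:split])
r_test = pearson(model[split:], data[split:])
```

With real series the two numbers differ because the fit only minimizes error inside the training window; here they come out similar because the synthetic "noise" is stationary.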
Welcome to the stage of ENSO analysis where getting the rest of the details correct provides only marginal benefits. Yet these details are still important: as with tidal analysis and eclipse models, they matter for fine-tuning predictions.
- Footnote 1: For conventional tidal analysis, hundreds of resulting terms are the norm, which is why commercial tidal prediction programs allow an unlimited number of components.
Full recipe for #ENSO. Seasonal phase lock. Biennial modulation. 1-year delay differential. Two lunar periods, Draconic(nodal) & Anomalistic
— Paul Pukite (@WHUT) August 23, 2017
and for QBO
Full recipe for #QBO. Take acceleration of data, via first derivative of velocity. Biannual phase lock. Draconic (nodal) lunar forcing.
— Paul Pukite (@WHUT) August 23, 2017
An example of a prediction:
How does anyone know which way the ENSO behavior is heading if there is not a clear understanding of the underlying mechanism? 
For the prediction quoted above, the closer one gets to a peak or valley, the safer it is to make a dead-reckoning guess. For example, I can say a low tide is coming if we are coming off a high tide, even if I have no idea what causes tides.
Yet, if we understand the mechanism behind ocean tides, namely that they are due to the gravitational pull of the sun and the moon, we can do a much better job of prediction.
The New York Times climate change reporter Justin Gillis suggests that climate science can make predictions as well as geophysicists can predict eclipses:
https://www.nytimes.com/2017/08/18/climate/should-you-trust-climate-science-maybe-the-eclipse-is-a-clue.html. And there is this:
Yet, if climate scientists can't figure out the mechanism behind a behavior such as ENSO, everyone is essentially in the same boat, fishing for a basic understanding.
So what happens if we can formulate the messy ENSO behavior as a basic geophysics problem, something with the complexity of tides? We are nowhere near that, according to the current research literature, unless this finding, which has been a frequent topic here, turns out to be true.
In this case, the recent solar eclipse is in fact a clue. The precise orbit of the moon is vital to determining the cycles of ENSO. If this assertion is true, one day we will likely be able to predict when the next El Nino occurs, with the accuracy of predicting the next eclipse.
— Paul Pukite (@WHUT) August 17, 2017
Applying the ENSO model to predict El Nino and La Nina events is automatic. There are no adjustable parameters apart from the tidal forcing amplitudes and phases calibrated by fitting over the training interval. The cross-validated interval from 1950 to the present is therefore untainted by the fitting process, and so can be used as a completely independent and unbiased test.
I am interested in variations of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere. The premise is that such a formulation can be used to perhaps model ENSO and QBO.
The so-called primitive equations are the starting point, as these encode constraints on the flow geometry (i.e., vertical motion much smaller than horizontal motion, and fluid-layer depth small compared to Earth's radius). From there, we go to Laplace's tidal equations, which are a linearization of the primitive equations.
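For reference, a standard textbook statement of Laplace's tidal equations (linearized shallow-water flow on a rotating sphere, with mean fluid depth D, surface displacement ζ, tidal potential U, rotation rate Ω, radius a, latitude φ, and longitude λ; sign conventions for U vary by author, and this is the generic form, not a derivation specific to this model):

```latex
\frac{\partial u}{\partial t} - 2\Omega \sin\varphi \, v
  = -\frac{1}{a\cos\varphi}\frac{\partial}{\partial\lambda}\left(g\zeta + U\right)

\frac{\partial v}{\partial t} + 2\Omega \sin\varphi \, u
  = -\frac{1}{a}\frac{\partial}{\partial\varphi}\left(g\zeta + U\right)

\frac{\partial \zeta}{\partial t}
  + \frac{1}{a\cos\varphi}\left[\frac{\partial (uD)}{\partial\lambda}
  + \frac{\partial (vD\cos\varphi)}{\partial\varphi}\right] = 0
```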
Of course the equations are under-determined, so the only hope I had of solving them is to provide this simplifying assumption:
If you don't believe that this partial differential coupling of a latitudinal forcing to a tidal response occurs, then don't go further. But if you do, then:
Pierre-Simon Laplace was one of the first mathematicians who took an interest in problems of probability and determinism. It's surprising how much of the math and applied physics that Laplace developed gets used in day-to-day analysis. For example, while working on the ENSO and QBO analysis, I have invoked the following topics at some point:
- Laplace's tidal equations
- Laplace's equation
- Laplacian differential operator
- Laplace transform
- Difference equation
- Planetary and lunar orbital perturbations
- Probability methods and problems
- Inductive probability
- Bayesian analysis, e.g. the Sunrise problem
- Statistical methods and applications
- Central limit theorem
- Least squares
- Filling in holes of Newton's differential calculus
- Others here
Apparently he did so much and was so comprehensive that in some of his longer treatises he often didn't cite the work of others, making it difficult to pin down everything he was responsible for (evidently he did have character flaws).
In any case, I recall applying each of the above in working out some aspect of a problem. One omission from the list: Laplace didn't invent Fourier analysis, though the Laplace transform is close to it in approach and utility.
When Laplace did all this research, he must have possessed insight into what constituted deterministic processes:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
— Pierre-Simon Laplace, A Philosophical Essay on Probabilities [Wikipedia]
He also seemed to be a very applied mathematician, as per a quote I have used before: "Probability theory is nothing but common sense reduced to calculation." There is nothing the least bit esoteric about Laplace's math, as it always seemed motivated by solving some physics problem or scientific observation. It appears that he wanted to explain all these astronomic and tidal problems in as simple a form as possible. Back then it may have been esoteric, but not today, as his techniques have become part of the essential engineering toolbox. I have to wonder, if Laplace were alive now, whether he would agree that geophysical processes such as ENSO and QBO are as deterministic as the sun rising every morning, or as the steady cyclic nature of the planetary and lunar orbits. And it wasn't as if Laplace had a confirmation bias that behaviors were automatically deterministic; otherwise he wouldn't have spent so much effort devising the rules of probability and statistics that are still in use today, such as the central limit theorem and least squares.
Perhaps he would have glanced at the ENSO problem for a few moments, noticed that it was in no way random, and then casually remarked with one of his frequent idiomatic phrases:
"Il est aisé à voir que..." ("It is easy to see that...")
It may have been so obvious to him that it wasn't important to give the details at the moment, only to fill in the chain of reasoning later. Much like the contextEarth model for QBO, derived from Laplace's tidal equations.
Where are the Laplaces of today who are willing to push the basic math and physics of climate variability as far as it will take them? The field has seemingly jumped from Laplace to Lorenz, and then on to chaotic uncertainty à la Tsonis or mystifying complexity à la Lindzen. We can probably do much better than to punt like that ... on first down, even!
Someone long ago must have stated that the El Nino/Southern Oscillation (ENSO) phenomenon was not related to lunisolar (lunar+solar) tidal forcing. This negative result (or null result) is not documented anywhere (AFAICT) but is likely considered conventional wisdom by climate scientists. The most direct evidence that climate scientists don't consider lunisolar forcing is that it appears nowhere in the parameterization of general circulation model (GCM) source code.
As a general rule, negative findings are rarely reported in research journals:
"As it stands now, researchers are typically rewarded (tenure, grants, better jobs, etc.) for publishing a quantity of publications in prestigious journals. They do this by
- Running small and statistically weak studies (they are easy to do) that produce only positive results, since journals tend to not publish negative findings.
- Ignoring negative findings.
- Publishing only new and exciting findings that journals are looking for.
- Never checking old findings for accuracy and replicability.
- Changing methodologies in mid-stream to assure positive results."
I imagine that if a budding graduate student devised a hypothetical ENSO/lunar tidal connection as a potential thesis topic, it would be rejected by his advisor, who would not want to risk his reputation or track record by going down a potential dead end. The same is perhaps true of the recent case of NASA JPL rejecting the proposal of one of their research teams, who suggested funding for this very topic. Read an excerpt from this footnote:
"None of the peer-reviewers nor collaborators in 2006 had anticipated that the most remarkable large-scale process that we were going to find comes from ocean circulations fueled by Luni-Geo-Solar gravitational energy. We found evidence of the existence of this energy in the data produced by satellites like QuikSCAT and ASCAT. Following the standard from the 1970's of using these satellite data as winds in numerical modeling of oceans and climate has created and continues to create significant errors in the simulated ocean temperature, salinity, and currents as well as in the atmosphere. Together with our co-workers, we chose not to publish the errors until a solution to appropriately use satellite data in numerical modeling was found. However, over the following years, proposed solutions were not considered because of various factors including economic and scientific pressure to publish and continue the standard agenda."
This is a clear example of confirmation bias stalling promising research. Yet, apparently there are no issues with pushing iffy models of ENSO based on nebulous chaos theory by climate change deniers such as Anastasios Tsonis.
Hmmm ... something is not right with this picture.
So if this lunisolar model of ENSO pans out, it is an excellent example of how confirmation bias impeded scientific progress, but with the scientific method eventually winning out.
And we can do the same confirmation-bias exercise with the quasi-biennial oscillation (QBO) phenomenon, substituting the climate change denier Richard Lindzen for Tsonis as the impediment to progress. Lindzen couldn't find the lunar connection (even though there is plenty of evidence that he tried), so he just assumed it wasn't there. Everyone who followed Lindzen's original model essentially confirmed his bias, and so no progress was made, until the bias was removed and the lunisolar forcing re-evaluated.
The difference here is that I am not preparing a thesis or working for NASA. This is one way of inoculating oneself against historical confirmation biases: by not being part of an inside consensus, I have no one suggesting "don't go there". By the same token, I now possess an apparent confirmation bias of my own, that lunisolar forcing plays a primary role in certain climate phenomena. Yet it's a weak confirmation bias, because I didn't start with this view; it gathered steam from the evidence accrued over the past few years. It is now up to others to use the scientific method to reject this model. And, of course, I will be the first to abandon it if I come across strong evidence against it. After all, I don't have any particular allegiance to the moon gods, only to the learned view that oscillations of this nature do not occur via spontaneous resonance.
As an important footnote to this post, consider the recent admission that lunar forces play a significant role in triggering earthquakes. Up until last year, the confirmation bias was that the lunar gravitational forcing was too weak to trigger earthquakes, and so the onset was historically described in statistical terms: an earthquake triggered by the passage of time and the slow creep of a fault. But the tide turned in 2016, when two independent groups found significant correlations with lunar cycles: a Japanese group led by Ide and a US Geological Survey group led by van der Elst. These are the same fortnightly lunar cycles (see Figure 2 below) that are used in the ENSO model described above (compare to the lower chart in Figure 1). So the new thinking is that the gravitational pull of the moon will indeed trigger the slipping of a fault, and this happens often enough that future predictions of earthquakes (for example, along the San Andreas fault) can use tidal tables to aid the analysis.

The bottom line is that we need to monitor the earth-sciences consensus regarding lunar forcing over the next few years, both in terms of ENSO and QBO climate behavior and with regard to earthquake analysis. Scientific theories are not binding, unlike sporting events: "World cup matches cannot be replayed, but science can be corrected afterwards." Thus, the confirmation bias of "no lunar forcing" is not necessarily set in stone.
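As a quick numerical aside, the fortnightly cycles referred to above are simply half the corresponding monthly lunar periods (standard lunar-month values in days):

```python
# Standard lunar-month lengths, in days
draconic = 27.2122      # node-to-node (nodal) month
anomalistic = 27.5545   # perigee-to-perigee month

# The fortnightly tidal constituents arise at half these periods
fortnightly_draconic = draconic / 2        # ~13.61 days
fortnightly_anomalistic = anomalistic / 2  # ~13.78 days
```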
Ide, Satoshi, Suguru Yabe, and Yoshiyuki Tanaka. "Earthquake potential revealed by tidal influence on earthquake size-frequency statistics." Nature Geoscience 9.11 (2016): 834-837.
van der Elst, Nicholas J., et al. "Fortnightly modulation of San Andreas tremor and low-frequency earthquakes." Proceedings of the National Academy of Sciences (2016): 201524316.
Delorey, Andrew A., Nicholas J. van der Elst, and Paul A. Johnson. "Tidal triggering of earthquakes suggests poroelastic behavior on the San Andreas Fault." Earth and Planetary Science Letters 460 (2017): 164-170.