ENSO Transformation

After playing around with the best way to map a model to ENSO, I am coming to the conclusion that the wave equation transformation is a good approach.

Fig. 1: Upper panel is the wave transformation fit to SOI; lower panel is the residual error signal, which is very uniform and flat.

What the transformation amounts to is a plot of the LHS vs. the RHS of the SOI differential wave equation, where the Mathieu (or Hill) factor k(t) is estimated to correspond to a characteristic frequency of ~1/(4 years),

 SOI''(t) + k(t) SOI(t) = F(t)

and where F(t) is the forcing function, composed of the main suspects: QBO, the Chandler wobble, TSI, and the long-term tidal beat frequencies.
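As a minimal sketch of what the transformation computes (placeholder data, not the actual fitting code; the constant k below just encodes the ~1/(4 year) characteristic frequency):

```python
import numpy as np

# Minimal sketch of the wave-equation transformation (placeholder data,
# not the actual SOI fitting code).
dt = 1.0 / 12.0                    # monthly samples, time in years
t = np.arange(0.0, 133.0, dt)      # ~1880-2013 span
soi = np.sin(2 * np.pi * t / 3.8)  # stand-in for the filtered SOI series

# Characteristic frequency ~1/(4 yr) -> k ~ (2*pi/4)^2; in the full model
# k(t) is the time-varying Mathieu/Hill factor.
k = (2 * np.pi / 4.0) ** 2

# LHS of SOI''(t) + k(t) SOI(t): second derivative by central differences.
lhs = np.gradient(np.gradient(soi, dt), dt) + k * soi

# Plotting lhs against a candidate forcing F(t) is the transformation view:
# a good model makes the two traces overlay, leaving a flat residual.
```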

To get the full 1880-to-2013 fit, I added a forcing break at 1980, likely due to TSI.

The wavelet scalogram agreement is very impressive, indicating that all frequency scales are being accounted for.

Fig 2: Wavelet scalogram, scale in months since 1880.

This is essentially just a different way of looking at the model described in the arXiv ENSO sloshing paper. One can extend the forcing and then integrate the LHS to project the ENSO waveform into the future.
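A hedged sketch of that projection step, integrating SOI'' = F(t) - k(t) SOI forward with scipy; the k(t) and F(t) below are sinusoidal stand-ins, not the fitted functions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of projecting the waveform: integrate SOI'' = F(t) - k(t)*SOI
# forward in time. k(t) and F(t) are stand-ins for the fitted functions.
def k(t):
    return (2 * np.pi / 4.0) ** 2 * (1 + 0.2 * np.cos(2 * np.pi * t / 8.0))

def F(t):
    return 0.5 * np.sin(2 * np.pi * t / 2.33)  # QBO-like stand-in forcing

def rhs(t, y):
    soi, dsoi = y
    return [dsoi, F(t) - k(t) * soi]

# Integrate past the fitted interval (t in years) to obtain a projection.
sol = solve_ivp(rhs, (0.0, 150.0), y0=[0.0, 0.0], max_step=0.05)
projection = sol.y[0]  # ENSO waveform estimate over sol.t
```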


The Hidden Harmony of ENSO

With this analysis, I wanted to demonstrate the underlying order of the most concise SOI Model. The model is characterized by these salient fit parameters:

  1. Two slightly offset forcing sinusoids which match the average QBO forcing cycle
  2. A forcing sinusoid that maps to the frequency of the Chandler wobble beat
  3. A Mathieu modulation perturbing the 2nd-order DiffEq with a periodicity of about 8 years

This set of four parameters was used to model both the modern-day record, the atmospheric pressure data underlying the Southern Oscillation Index, and proxy records of historical coral data. The parameters seem to match closely over widely separated time intervals (see Figure 5 in the latter link).
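A sketch of how those parameters compose into a forcing and a modulation; all amplitudes, phases, and exact periods below are hypothetical placeholders:

```python
import numpy as np

t = np.arange(0.0, 133.0, 1.0 / 12.0)  # years, monthly steps

# 1. Two slightly offset sinusoids around the average QBO cycle (~2.33 yr);
#    the offsets and amplitudes here are hypothetical placeholders.
qbo = np.sin(2 * np.pi * t / 2.31) + np.sin(2 * np.pi * t / 2.36)

# 2. A sinusoid at the Chandler wobble beat period (~6.45 yr)
chandler = 0.5 * np.sin(2 * np.pi * t / 6.45)

forcing = qbo + chandler

# 3. Mathieu modulation perturbing the 2nd-order DiffEq, ~8-yr periodicity
k = (2 * np.pi / 4.0) ** 2 * (1 + 0.3 * np.cos(2 * np.pi * t / 8.0))
```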

Figure 1 is the modern-day SOI record, suitably filtered to show the multi-year excursions.

Fig 1: SOI Data. The waveform is erratic, to say the least.

It is amazing that so erratic a waveform can be modeled by a limited set of parameters that actually make some physical sense, but that is nature for you, and it is the idea behind "sloppy modeling" -- models that use just a few parameters to accurately describe a behavior. The simplified model strongly suggests that there is a hidden harmony acting to drive ENSO.


Lithium Battery Characterization

In the menu under Stochastic Analysis, I have a white paper called "Characterization of Charge and Discharge Regimes in Lithium-Ion Batteries".

This is a breakthrough in modeling the fat-tail behavior of lithium-ion batteries, and something with a lot of practical analysis benefits considering the push toward commonplace use of Li+ technology (see Tesla's Powerwall and review).

From the introduction:

"Modeling with uncertainty quantification has application to such phenomena as oxidation, corrosion, thermal response, and particulate growth. These fall into the classes of phenomena governed substantially by diffusional processes. At its most fundamental, diffusion is a model of a random walk. Without a strong convection or advection term to guide the process (e.g. provided by an electric or gravitational field), the kinetic mechanism of a particle generates a random trajectory that is well understood based on statistical physics principles. The standard physics approach is to solve a master diffusion equation under transient conditions. This turns into a kernel solution that we can apply to an arbitrary forcing function, such as provided by an input material flux or thermal impulse. In the case of a rechargeable battery, such as Li+, the flux is charged ions under the influence of an electric field."

Alas, when I tried to submit the paper to arXiv as a preprint, it was rejected. The first time, it was rejected due to a mixup in the citation numbering. The second time, they said it was removed from the publication queue without saying exactly why, suggesting it be submitted to a "conventional journal" instead.

I do not need that kind of hassle. I can just as easily DIY.

Characterizing Changes in the Angular Momentum of the Earth

Sloshing of the ocean's waters, as exemplified by ENSO, can only be generated by a suitable forcing; nothing will spontaneously slosh back and forth unless it gets the right excitation. Some might hold the naive picture that the rotation of the earth by itself can cause the sloshing, but we have to remember that this produces a steady centrifugal force, with no changes over time. That last part -- "no changes over time" -- is only correct to zeroth order. Two classes of mechanisms can disrupt this constant rotation rate. First, the Chandler wobble of the earth's axis causes a continuously changing angular momentum at a point of reference; that is a general wobble mechanism, similar to the spinning of a top. The second class comprises nonspecific events that can either speed up or slow down the rotation rate of the earth. Note that the wobble may actually belong to this class, as it can be hard to distinguish the specific mechanisms behind the change in rate. Both of these mechanisms have been measured, and both indicate a clear periodic signal of approximately 6 years. The Chandler wobble was described in a previous SOI modeling post, and it shows a strong average period of 6.45 years; see Figure 1 below.

Fig 1: The Chandler Wobble leads to a continuous change of angular momentum of a point of reference, in this case the north pole. A phase shift occurs between 1920 and 1930, but otherwise the period is relatively fixed at 6.45 years, which is the beat period of the 433-day wobble and the calendar year.
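The beat arithmetic behind that figure is a one-liner; a 433-day wobble against the calendar year gives a beat period near 6.4 years (6.45 corresponds to a wobble period closer to 432 days):

```python
# Beat of the 433-day Chandler wobble against the calendar year,
# with frequencies in cycles per day.
f_year = 1.0 / 365.25
f_wobble = 1.0 / 433.0
beat_years = 1.0 / (f_year - f_wobble) / 365.25
print(beat_years)  # ~6.4 years; ~6.45 for a wobble period near 432 days
```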


Climate Science is just not that hard

From Andy Lacis, this is motivation to keep on looking at what Isaac Held calls the "fruit-fly" models of climate science.

Rabett Run blog -- More from Andy Lacis

“It would seem more appropriate to assign "wickedness" to problems that are more specifically related to witches. The climate problem, while clearly complex and complicated, is not incomprehensible. Current climate models do a very credible job in simulating current climate variability and seasonal changes. Present-day weather models make credible weather forecasts – and there is a close relationship. Most of the cutting edge current climate modeling research is aimed at understanding the physics of ocean circulation and the natural variability of the climate system that this generates. While this may be the principal source of uncertainty in predicting regional climate change and weather extreme events, this uncertainty in modeling the climate system’s natural variability is clearly separate and unrelated to the radiative energy balance physics that characterize the global warming problem. The appropriate uncertainty that exists in one area of climate modeling does not automatically translate to all other components of the climate system.”

I am continuing to work on the ENSO model, and as it gets simpler, the fit improves. I started using an ENSO metric that combines the SOI at a 2/3 weight (Tahiti and Darwin) with the NINO34 index, negated to match the SOI sign, at a 1/3 weight (another interesting variation to consider is a median filter picking the middle value). This fit has a correlation coefficient of >0.80.

Fig 1: Recent SOIM fit. Yellow indicates regions of "poorer" fit

Another metric to consider is a variation of a binary classifier. The idea is simple: since the model and data both oscillate about zero, then simply by counting the number of times that the data and model agree on the sign of their excursions, one can conveniently estimate fitness. As it happens, an 80% agreement in excursion classification also corresponds to roughly a 0.80 correlation coefficient.

Fig 2: Binary classifier for estimating correlation of model.
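A sketch of that excursion-sign classifier; the series below are stand-ins for the filtered model and data:

```python
import numpy as np

def excursion_agreement(model, data):
    """Fraction of samples where model and data agree on the sign
    of their excursion about zero."""
    return np.mean(np.sign(model) == np.sign(data))

# Stand-in series: ~80% sign agreement tracks a CC near 0.80.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 1200)
data = np.sin(t) + 0.5 * rng.standard_normal(t.size)
model = np.sin(t)
print(excursion_agreement(model, data))
```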

There is a limit to how high the CC can go, since the correlation between SOI and NINO34 itself tops out at about 0.86. This really has to do with nuisance noise in the system and the inability to identify the true oceanic dipole with confidence. So, for all practical purposes, a CC of 0.80, or identifying + or - excursions at 80% accuracy, is quite good, with diminishing returns after that (beyond that point, one is fitting to the noise).

The other metric that I am exploring is an estimate of the agreement between two wavelet scalograms. As one can see below, the fit in two dimensions appears quite good, and the extra degrees of freedom provide better discrimination in identifying which of two models fits best.

Fig 3: Wavelet scalograms.

I have asked the participants at the Azimuth Forum how best to create an effective CC for comparing wavelet scalograms, but have had no response so far.
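One candidate I am considering in the meantime: correlate the flattened magnitudes of the two scalograms. A sketch using PyWavelets, where the Morlet wavelet and the scale grid are my assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def scalogram_cc(x, y, scales=np.arange(2, 128)):
    """Pearson correlation between the magnitudes of two wavelet scalograms."""
    cx, _ = pywt.cwt(x, scales, 'morl')
    cy, _ = pywt.cwt(y, scales, 'morl')
    return np.corrcoef(np.abs(cx).ravel(), np.abs(cy).ravel())[0, 1]

# Usage with stand-in monthly series:
t = np.arange(1600)
print(scalogram_cc(np.sin(t / 20.0), np.sin(t / 20.0 + 0.3)))
```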


BTW, archived discussions of the ENSO Revisited thread at the Azimuth Forum:

http://web.archive.org/web/20150426142611/https://forum.azimuthproject.org/discussion/1608/enso-revisit/p1

http://web.archive.org/web/20150426142718/https://forum.azimuthproject.org/discussion/1608/enso-revisit/p2

Also, a battle I had over at RealClimate with a neo-denier. The moderators at RC dispatched the fella to the Bore Hole thread.

The Oil Shock Model Simplified

DC at the OilPeakClimate blog spent some time re-analyzing the Oil Shock model and the Dispersive Discovery model, which were originally described in The Oil Conundrum book (see menu above).

Whenever a model is re-analyzed, there is the possibility of improvement. In DC's case, he made a very smart move in trying to separate the extra-heavy oil as a distinct process. The Shell Oil discovery data appears to combine the two sets of data, leading to a much larger URR than Laherrere gets. What he accomplished was to reconcile the lighter-crude Laherrere discovery data with the reality that there are likely ~500 GB of extra-heavy crude waiting to be exploited. Whether this actually happens is the big question.

Read DC's whole post and the discussion here; he has made an excellent effort over the last several years to digest some heavy math and dry reading from the book. He is also making sense of the Bakken oil production numbers in other posts.

As a PS, I have added an extra section to the book describing the dispersive diffusion model applied to the Bakken production numbers.


DC also posted this piece on the PeakOilBarrel blog. Almost 500 comments there.

CSALT re-analysis

I have previously described a basic thermodynamic model for global average temperature that I call CSALT. It consists of five primary factors that account for the trend and the natural variability. The acronym C S A L T spells out each of these factors.

C obviously stands for excess atmospheric CO2; its contribution to the trend is factored as log(CO2).

S refers to the ENSO SOI index, which is known to provide a significant fraction of the temperature variability without adding anything by way of a long-term trend. Both the SOI and the closely correlated Atmospheric Angular Momentum (AAM) have the property that they revert to a long-term mean value.

A refers to aerosols, specifically the volcanic aerosols that cause sporadic cooling dips in the global temperature measure. Just a handful of volcanic eruptions of VEI 5 and above can account for the major cooling noise spikes of the last century.

L refers to the length-of-day (LOD) variability that the earth experiences. Though not well known, this anomaly closely correlates with the variability in temperature caused by geophysical processes. Because of conservation of energy, changes in kinetic rotational energy are balanced by changes in thermal energy, via the waxing and waning of long-term frictional processes. This is probably the weakest link in terms of fundamental understanding, with the nascent research findings mainly coming out of NASA JPL (and check this out too).

T refers to variations in Total Solar Irradiance (TSI) due mainly to sunspot fluctuations. This is smaller in comparison to the others and around the level predicted from basic insolation physics.

Taken together, these factors account for a correlation coefficient of well over 0.90 against a global temperature series such as GISTEMP.
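A hedged sketch of the regression structure (synthetic stand-in series; the real inputs would be GISTEMP and the published CO2, SOI, aerosol, LOD, and TSI records):

```python
import numpy as np

# Sketch of the CSALT multiple linear regression with synthetic stand-ins:
# temperature ~ log(CO2) + SOI + aerosols + LOD + TSI.
n = 1600  # roughly monthly samples from 1880 on
rng = np.random.default_rng(0)
co2 = 290.0 + 0.06 * np.arange(n)  # ppm, illustrative ramp
soi, aero, lod, tsi = rng.standard_normal((4, n))

X = np.column_stack([np.log(co2), soi, aero, lod, tsi, np.ones(n)])
true_coef = np.array([3.0, -0.05, -0.1, 0.05, 0.05, -17.0])
temp = X @ true_coef + 0.1 * rng.standard_normal(n)  # synthetic "GISTEMP"

coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
cc = np.corrcoef(X @ coef, temp)[0, 1]  # correlation coefficient of the fit
```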

Over time I have experimented with other factors, such as tidal periodicities and other oceanic dipoles such as the North Atlantic Oscillation, but these are minor in comparison to the main CSALT factors. They also add more degrees of freedom, so they can lead one astray into spurious correlation. A factor such as AMO contains parts of the temperature record itself, so it also needs to be treated with care.

My biggest surprise is that more scientists don’t do this kind of modeling. My guess is that it might be too basic and not sophisticated enough in its treatment of atmospheric and climate physics to attract the General Circulation Model (GCM) adherents.

So what follows is a step-by-step decomposition of the NASA GISS temperature record shown below, applying the symbolic regression tool Eureqa as an alternative to the CSALT multiple linear regression algorithm. This series has been filtered and a World War II correction has been applied.

The GISTEMP profile, with the 1940-1944 interval adjusted down by 0.1.



Imaging via Particle Velocimetry of Sloshing

This is an interesting paper on capturing the volumetric effects of sloshing:

Simonini, A., Vetrano, M. R., Colinet, P., & Rambaud, P. (2014). Particle Image Velocimetry applied to water sloshing due to a harmonic external excitation. In Proc. of the 17th International Symposium on Applications of Laser Techniques to Fluid Mechanics.
The scale that they describe applies to containers filled with liquid subject to external forces. Compare this against the very large scale equatorial Pacific dynamics, linked from NOAA here.

Absolute temperature, on which the anomaly is based.


Temperature anomaly. The emerging hotspots are what lead to El Nino conditions.

Forecasting versus Problem-Solving

The challenge of explaining climate phenomena such as ENSO leads to an interesting conundrum. Do we want to understand the physics behind the phenomenon, or do we want to optimize our ability to forecast?

Take the example of the output of a crude power supply, and suppose that all one has is one cycle of output.

  1. The forecaster thinks that it is fair to use only one half of that cycle, because then he can use that to forecast the other half of the cycle.
  2. The problem solver wants the whole cycle.

Why is the problem solver in better shape?

  1. The forecaster looks at the half cycle and extrapolates it to a complete cycle. See the dotted line below.
  2. The problem solver looks at the continuation and discovers that it is a full-wave rectified signal. See the solid line below.

In this case, the problem solver is right because the power supply happens to be a full-wave rectifier needed to create a DC supply voltage. The forecaster made a guess that would have been correct only if it were an AC power supply.
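The toy example is easy to reproduce; a sketch comparing the forecaster's extrapolated AC guess against the rectified continuation:

```python
import numpy as np

# The forecaster sees one half-cycle of |sin(t)| on [0, pi] and, assuming an
# AC waveform, extrapolates sin(t); the actual signal is full-wave rectified.
t_future = np.linspace(np.pi, 2 * np.pi, 200)

forecast = np.sin(t_future)        # extrapolated AC guess: goes negative
actual = np.abs(np.sin(t_future))  # rectified signal repeats the positive lobe

print(np.max(np.abs(forecast - actual)))  # the guesses diverge by as much as 2
```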

Lose your generality, and that is what can happen. As Dara says, the key is to look for structures or patterns in the data -- while reducing the noise -- and if that means using as much of the data as possible, so be it.


Paper on Sloshing Model for ENSO

I recently archived the paper on arXiv and submitted it to PRL.

Get the paper here from arXiv as a PDF.

The nearly year-long investigation is timelined and outlined here.

The final model fit:

Fig 1: Figure from the paper. When the correlation reached 0.8, I thought it might be the right time to stop. The noise in the Darwin or Tahiti time-series was the limiting factor in how good the fit could eventually become.

Thanks for the good comments!

One application of the model described below
