Baby D Model

In the last post on characterizing ENSO, I discussed the trend I observe in the modeling process -- the more I have worked it, the simpler the model has become. This has culminated in what I call a "baby" differential equation model, a model that has been reduced to the bare minimum.

This has partly been inspired by Professor John Carlos Baez's advice over at the Azimuth Project:

"I think it's important to take the simplest model that does a pretty good fit, and offer extensive evidence that the fit is too good to be due to chance. This means keeping the number of adjustable parameters to a bare minimum, not putting in all the bells and whistles, and doing extensive statistical tests"

Well, I am now working with the bare-minimum model. All I incorporate is a QBO forcing that matches the available data from 1953 to the present. I then apply it to a 2nd-order differential equation with a characteristic resonant frequency \omega_0 .

 f''(t) + \omega_0^2 f(t) = Forcing(t) = qbo(t)

I adjust the initial conditions and then slide the numerically computed Mathematica output to align with the data.
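Below is a minimal sketch of that integration step in Python/SciPy (the actual computation was a couple of lines of Mathematica); the characteristic frequency and the stand-in forcing here are placeholders, not the fitted values.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal sketch of f''(t) + omega0^2 f(t) = qbo(t); values are placeholders
    omega0 = 2 * np.pi / 4.25        # assumed resonant period of ~4.25 years (rad/yr)

    def qbo(t):
        # stand-in forcing: the principal ~2.33-year QBO cycle; the real model
        # uses the empirical Fourier-series fit to the QBO record
        return np.cos(2 * np.pi * t / 2.33)

    def rhs(t, y):
        f, fdot = y
        return [fdot, qbo(t) - omega0**2 * f]

    t = np.linspace(0, 100, 1201)    # 100 years at monthly resolution
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.1], t_eval=t, rtol=1e-8)
    f_model = sol.y[0]               # then slide/scale against the SOI data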

The figure below is the empirical fit to the QBO, configured as a Fourier series so I can extrapolate backwards to 1880.
Fig 1: Empirical Fourier-series fit to the QBO data.
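As a rough illustration of that kind of fit (the component periods and amplitudes from the actual Mathematica fit are not reproduced here), a sum of sinusoids can be fit by linear least squares and then evaluated back to 1880:

    import numpy as np

    # Stand-in monthly QBO series from 1953 onward (replace with the real record)
    n = 720
    t = 1953 + np.arange(n) / 12.0
    qbo_data = np.cos(2 * np.pi * t / 2.33) + 0.1 * np.random.randn(n)

    # Trial periods (years) are assumptions seeded near the principal QBO cycle
    periods = [2.33, 2.40, 6.5]

    def design(t):
        cols = [np.ones_like(t)]
        for T in periods:
            cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), qbo_data, rcond=None)

    # Evaluate the fitted series back to 1880 for use as the model forcing
    t_full = 1880 + np.arange((2014 - 1880) * 12) / 12.0
    qbo_extrapolated = design(t_full) @ coef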

Figure 2 is a fit using the first 400 months of ENSO SOI data as a training interval, validated against the remaining 800 months (100 years in total).
Fig 2: Fit trained on the first 400 months of SOI data, validated over the remaining 800 months.

The next fit uses the second 400 months of data as a training interval, which is then validated 400 months backward and 400 months forward.
Fig 3: Fit trained on the second 400 months, validated 400 months forward and backward.

The final fit uses the last 400 months of data and validates against the first 800 months.
Fig 4: Fit trained on the last 400 months, validated against the first 800 months.

In each case, the overall correlation coefficient is above 0.7, which is as much as can be expected given the correlation coefficient of the empirical model fit to the raw QBO data.

At this point we can show a useful transformation.

 f''(t) + \omega_0^2 f(t) = qbo(t)

If we take the Fourier transform of both sides, then:

 (-\omega^2 + \omega_0^2) F(\omega) = QBO(\omega)
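
Solving for F(\omega) makes the scaling explicit:

 F(\omega) = \frac{QBO(\omega)}{\omega_0^2 - \omega^2}

For |\omega| well above \omega_0 the denominator grows as \omega^2, so the high-frequency content of the QBO is suppressed in F(\omega), while components at and below the resonant frequency are passed through or amplified.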

This implies that the power spectra of the SOI and the QBO should contain many of the same spectral components, but with amplitudes rescaled by the 1/(\omega_0^2 - \omega^2) factor. That is why the SOI, i.e. f(t) or F( \omega ), shows the long time-period components more strongly, while the QBO shows the higher-frequency components, e.g. the principal 2.33-year period, more strongly.

One may ask, what happened to the Chandler wobble component that has been discussed in previous posts?

It is still there, but now the wobble forcing is absorbed into the QBO. Using the recently introduced and experimental "FindFormula" machine learning algorithm in Mathematica, the 6+ year Chandler wobble beat period shows up in the top 3 cyclic components of the QBO! See the following figure with the yellow highlight. It is not nearly as strong as the 2.33-year period, but because of the spectral scaling discussed in the previous paragraph, it impacts the ENSO time profile as strongly as the higher-frequency components do.

Fig 5: Top cyclic components of the QBO identified by FindFormula, with the ~6-year Chandler wobble beat period highlighted in yellow.
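FindFormula is a black box, but the same ranking can be sanity-checked with an ordinary periodogram; the sketch below is a stand-in for that check, not the algorithm used above.

    import numpy as np

    # Rank the dominant cyclic components of a monthly series by spectral power
    def top_periods(series, dt_years=1.0 / 12.0, n_top=3):
        series = series - np.mean(series)
        power = np.abs(np.fft.rfft(series)) ** 2
        freqs = np.fft.rfftfreq(len(series), d=dt_years)   # cycles per year
        order = np.argsort(power[1:])[::-1] + 1            # skip the DC bin
        return [(1.0 / freqs[i], power[i]) for i in order[:n_top]]

    # Applied to the QBO record, the ~2.33-year cycle should rank first, with the
    # ~6-year Chandler wobble beat expected among the leading components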

Discussion

The caveat in this analysis is that I didn't include data after 1980, owing to the non-stationary disturbance of the climate shift that started being felt around then. That will require an additional 400-month analysis interval, which I am still working on.

"Can you tell me in words roughly what these 20 lines do, or point me to something you've written that explains this latest version?"

It is actually now about half that if the fitting to the QBO is removed. The DiffEq integration itself is only a couple of lines of Mathematica code; the rest handles the data and the graphing.

The fit to the QBO can be made as accurate as desired, but I stopped when it reached the correlation coefficient shown in the first figure above. The idea is that the final modeling result will have a correlation coefficient of about that value as well (since the residual propagates as an error signal), which appears to be the case.

The geophysics story is very simple to explain: it is still sloshing, but the forcing is now reduced to one observable component. If I go this Baby D Model route, I can shorten my ENSO sloshing paper from its current 3 pages to perhaps a page and a half.

The statistical test proposed is to compare against an alternate model and then see which one wins with an information criterion such as Akaike's. I only have a few adjustable parameters (2 initial conditions, a characteristic frequency, and a time shift from QBO to ENSO), so this will probably beat any other model proposed. It certainly will beat a GCM, which contains hundreds or thousands of parameters.
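For reference, here is a sketch of what that Akaike comparison would look like, assuming Gaussian residuals; k = 4 counts the adjustable parameters listed above.

    import numpy as np

    # AIC for a least-squares fit with k adjustable parameters
    def aic(residuals, k):
        n = len(residuals)
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + 2 * k

    # the model with the lower AIC wins, e.g.
    #   aic(soi_data - baby_model, k=4)  vs  aic(soi_data - alt_model, k=k_alt)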

 

ENSO redux

I've been getting push-back on the ENSO sloshing model that I have devised over the last year. The push-back revolves mainly around my reluctance to use it for projection, as in immediately. I know all the pitfalls of forecasting -- the main one being that if you initially make a wrong prediction, even with the usual caveats, you essentially don't get a second chance. The other problem with forecasting is that it is not timely; in other words, one will have to wait around for years to prove the validity of a model. Who has time for that? :)

Yet there are ways around forecasting into the future, the primary one being to use prior data as a training interval and then use other data in the timeline (out-of-band data) as a check.

I will give an example of using training data of SOI from 1880-1913 (400 months of data points) to predict the SOI profile up to 1980 (800 months of data points). We know, and other researchers [1] have confirmed, that ENSO undergoes a transition around 1980, which obviously can't be forecast. Other than that, this is a very aggressive training set, as it relies on ancient historical data that some consider not of the highest quality. The results are encouraging, to say the least.
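The bookkeeping for that check is simple; a sketch, assuming the model has been evaluated over the full 1880-1980 span after its parameters were adjusted against only the first 400 months:

    import numpy as np

    # Score the fit separately on the training interval and the out-of-band data
    def train_validate_cc(data, model, n_train=400):
        cc_train = np.corrcoef(data[:n_train], model[:n_train])[0, 1]
        cc_valid = np.corrcoef(data[n_train:], model[n_train:])[0, 1]
        return cc_train, cc_valid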


ENSO Transformation

After playing around with the best way to map a model to ENSO, I am coming to the conclusion that the wave equation transformation is a good approach.

Fig. 1: Upper panel is the wave-transformation fit to SOI; lower panel is the difference (error) signal, which is very uniform and flat.

What the transformation amounts to is a plot of the LHS vs. the RHS of the SOI differential wave equation, where k(t) (the Mathieu or Hill factor) is estimated to correspond to a characteristic frequency of ~1/(4 years),

 SOI''(t)+k(t) SOI(t) = F(t)

and where F(t) is the forcing function, composed of the main suspects: QBO, Chandler wobble, TSI, and the long-term tidal beat frequencies.
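Here is a sketch of how the LHS can be computed directly from the data for that comparison; the k(t) shown is the simplest constant case, corresponding to the ~4-year characteristic period, rather than the actual Mathieu/Hill estimate.

    import numpy as np

    # Build the LHS, SOI''(t) + k(t) SOI(t), from the data by finite differences
    def wave_transform_lhs(soi, t, k):
        dt = t[1] - t[0]                       # assumes uniform sampling (years)
        soi_dd = np.gradient(np.gradient(soi, dt), dt)
        return soi_dd + k(t) * soi             # plot this against the forcing F(t)

    omega_c = 2 * np.pi / 4.0                  # ~1/(4 year) characteristic frequency
    k_const = lambda t: omega_c ** 2 * np.ones_like(t)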

I added a forcing break at 1980, likely due to TSI, to get the full 1880 to 2013 fit.

The wavelet scalogram agreement is very impressive, indicating that all frequency scales are being accounted for.

Fig 2: Wavelet scalogram, scale in months since 1880.

This is essentially just a different way of looking at the model described in the arXiv ENSO sloshing paper. What one can do is extend the forcing and then integrate the LHS to project the ENSO waveform into the future.

 

 

The Hidden Harmony of ENSO

With this analysis, I wanted to demonstrate the underlying order of the most concise SOI model. The model is characterized by these salient fit parameters:

  1. Two slightly offset forcing sinusoids which match the average QBO forcing cycle
  2. A forcing sinusoid that maps to the frequency of the Chandler wobble beat
  3. A Mathieu modulation perturbing the 2nd-order DiffEq with a periodicity of about 8 years

This set of four parameters was used to model both modern day records corresponding to the atmospheric pressure data describing the Southern Oscillation Index, as well as to proxy records of historical coral data. The parameters seem to match closely over widely separated time intervals (see Figure 5 in the latter link).
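Here is a sketch of how the four factors listed above could be wired together; the periods, amplitudes, and modulation depth are placeholders rather than the fitted values.

    import numpy as np
    from scipy.integrate import solve_ivp

    T_qbo1, T_qbo2 = 2.33, 2.40      # two slightly offset QBO-like periods (years)
    T_cw = 6.45                      # Chandler wobble beat period (years)
    T_mathieu = 8.0                  # Mathieu modulation period (years)

    def forcing(t):
        return (np.sin(2 * np.pi * t / T_qbo1) + np.sin(2 * np.pi * t / T_qbo2)
                + 0.5 * np.sin(2 * np.pi * t / T_cw))

    def rhs(t, y, omega0=2 * np.pi / 4.0, q=0.2):
        f, fdot = y
        k_t = omega0 ** 2 * (1 + q * np.cos(2 * np.pi * t / T_mathieu))  # Mathieu term
        return [fdot, forcing(t) - k_t * f]

    t = np.linspace(0, 130, 1561)
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.1], t_eval=t, rtol=1e-8)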

Figure 1 is the modern-day SOI record, suitably filtered to show the multi-year excursions.

Fig 1: SOI Data. The waveform is erratic, to say the least.

It is amazing that such an erratic waveform can be modeled by a limited set of parameters that actually make some physical sense, but that is nature for you -- and it is the idea behind "sloppy modeling": models that use just a few parameters to accurately describe a behavior. The simplified model strongly suggests that there is a hidden harmony acting to drive ENSO.


Lithium Battery Characterization

In the menu under Stochastic Analysis, I have a white paper called "Characterization of Charge and Discharge Regimes in Lithium-Ion Batteries".

This is a breakthrough in modeling the fat-tail behavior of Lithium-Ion batteries and something that has a lot of practical analysis benefits, considering the push toward commonplace use of Li+ technology (see Tesla's Powerwall and review).

From the introduction:

"Modeling with uncertainty quantification has application to such phenomena as oxidation, corrosion, thermal response, and particulate growth. These fall into the classes of phenomena governed substantially by diffusional processes. At its most fundamental, diffusion is a model of a random walk. Without a strong convection or advection term to guide the process (e.g. provided by an electric or gravitational field), the kinetic mechanism of a particle generates a random trajectory that is well understood based on statistical physics principles. The standard physics approach is to solve a master diffusion equation under transient conditions. This turns into a kernel solution that we can apply to an arbitrary forcing function, such as provided by an input material flux or thermal impulse. In the case of a rechargeable battery, such as Li+, the flux is charged ions under the influence of an electric field."

Alas, when I tried to submit the paper to arXiv as a preprint, it got rejected. The first time it was rejected due to a mixup in the citation numbering. The second time they said it was removed from the publication queue without exactly saying why, suggesting it be submitted to a "conventional journal" instead.

I do not need that kind of hassle. I can just as easily DIY.

Characterizing Changes in the Angular Momentum of the Earth

Sloshing of the ocean's waters, as exemplified by ENSO, can only be generated by a suitable forcing. Nothing will spontaneously slosh back and forth unless it gets the right excitation. Some might hold the naive picture that the rotation of the earth by itself can cause the sloshing, but we have to remember that this is a centrifugal force which is evenly directed downward with no changes over time. However, this last part -- "no changes over time" -- is only correct to zeroth order. Two classes of mechanisms can disrupt this constant rotation rate. First, the Chandler wobble of the earth's axis causes a continuously changing angular momentum at a point of reference; this is a general wobble mechanism similar to the spinning of a top. The second class consists of nonspecific events that can either speed up or slow down the rotation rate of the earth. Note that the wobble may actually belong to this class, as it may be hard to distinguish the specific mechanisms behind the change in rate. Both of these mechanisms have been measured, and both indicate a clear periodic signal of approximately 6 years. The Chandler wobble was described in a previous SOI modeling post; it shows a strong average period of 6.45 years (see Figure 1 below).

Fig 1: The Chandler wobble leads to a continuous change of angular momentum at a point of reference, in this case the north pole. A phase shift occurs between 1920 and 1930, but otherwise the period is relatively fixed at 6.45 years, which is the beat period of the 433-day wobble and the calendar year.


Climate Science is just not that hard

From Andy Lacis, this is motivation to keep on looking at what Isaac Held calls the "fruit-fly" models of climate science.

Rabett Run blog -- More from Andy Lacis

“It would seem more appropriate to assign "wickedness" to problems that are more specifically related to witches. The climate problem, while clearly complex and complicated, is not incomprehensible. Current climate models do a very credible job in simulating current climate variability and seasonal changes. Present-day weather models make credible weather forecasts – and there is a close relationship. Most of the cutting edge current climate modeling research is aimed at understanding the physics of ocean circulation and the natural variability of the climate system that this generates. While this may be the principal source of uncertainty in predicting regional climate change and weather extreme events, this uncertainty in modeling the climate system’s natural variability is clearly separate and unrelated to the radiative energy balance physics that characterize the global warming problem. The appropriate uncertainty that exists in one area of climate modeling does not automatically translate to all other components of the climate system.”

I am continuing to work on the ENSO model, and as it gets simpler, the fit improves. I started using an ENSO metric that combines the SOI (Tahiti and Darwin) at a 2/3 weighting with the NINO34 index (sign-flipped to match the SOI) at a 1/3 weighting (another interesting variation to consider is a median filter that picks the middle value). This fit has a correlation coefficient of >0.80.
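A sketch of that composite metric follows; the median variant shown is one reading of the idea, taking the middle value among the Tahiti-based SOI, the Darwin-based SOI, and the sign-flipped NINO34.

    import numpy as np

    # 2/3 weight on SOI plus 1/3 weight on the sign-flipped NINO34 (aligned monthly series)
    def composite_enso(soi, nino34):
        return (2.0 / 3.0) * soi - (1.0 / 3.0) * nino34

    # Median-filter variant: pick the middle value at each month
    def median_composite(soi_tahiti, soi_darwin, nino34):
        return np.median(np.vstack([soi_tahiti, soi_darwin, -nino34]), axis=0)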

Fig 1: Recent SOIM fit. Yellow indicates regions of "poorer" fit

Another metric to consider is a variation on a binary classifier. The idea is simple: since the model and data both oscillate about zero, just counting the fraction of times that data and model agree on the sign of the excursion gives a convenient estimate of fitness. As it happens, an 80% agreement in excursion classification also corresponds to roughly a 0.80 correlation coefficient.
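The classifier reduces to a one-liner:

    import numpy as np

    # Fraction of months where model and data agree on the sign of the excursion
    def sign_agreement(data, model):
        return np.mean(np.sign(data) == np.sign(model))

    # ~0.80 agreement here was found to track a Pearson CC of ~0.80:
    #   sign_agreement(enso_data, enso_model)  vs  np.corrcoef(enso_data, enso_model)[0, 1]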

Fig 2: Binary classifier for estimating correlation of model.

There is a limit to how high the CC can go, since the correlation between SOI and NINO34 tops off at about 0.86. This really has to do with nuisance noise in the system and the inability to identify the true oceanic dipole with confidence. So, for all practical purposes, a CC of 0.80 or identifying + or - excursions at 80% accuracy is quite good, with diminishing returns after that (since the fit is to the noise).

The other metric that I am exploring is an estimate of the agreement between two wavelet scalograms. As one can see below, the fit in 2 dimensions appears quite good, and the extra degrees of freedom provide better discrimination between two candidate models.

Fig 3: Wavelet scalograms.

I am asking the participants at the Azimuth Forum how best to create an effective CC for comparing wavelet scalograms, but have had no response so far.
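One candidate, offered as a sketch rather than a settled answer (it assumes the PyWavelets package is available): correlate the flattened magnitude maps of the two continuous wavelet transforms.

    import numpy as np
    import pywt   # PyWavelets, assumed available

    # Pearson correlation between the magnitude maps of two CWT scalograms
    def scalogram_cc(data, model, scales=np.arange(2, 128), wavelet="morl"):
        coef_d, _ = pywt.cwt(data, scales, wavelet)
        coef_m, _ = pywt.cwt(model, scales, wavelet)
        return np.corrcoef(np.abs(coef_d).ravel(), np.abs(coef_m).ravel())[0, 1]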


BTW, Archived discussions of the ENSO Revisited thread at the Azimuth Forum

http://web.archive.org/web/20150426142611/https://forum.azimuthproject.org/discussion/1608/enso-revisit/p1

http://web.archive.org/web/20150426142718/https://forum.azimuthproject.org/discussion/1608/enso-revisit/p2

Also, a battle I had over at RealClimate with a neo-denier. The moderators at RC dispatched the fella to the Bore Hole thread.

The Oil Shock Model Simplified

DC at the OilPeakClimate blog spent some time re-analyzing the Oil Shock model and the Dispersive Discovery model, which were originally described in The Oil Conundrum book (see menu above).

Whenever a model is re-analyzed, there is the possibility of improvement. In DC's case, he made a very smart move in trying to separate the extra-heavy oil as a distinct process. The Shell Oil discovery data appears to combine the two sets of data, leading to a much larger URR than Laherrere gets. What he accomplished is to reconcile the lighter-crude Laherrere discovery data with the reality that there are likely ~500 GB of extra-heavy crude waiting to be exploited. Whether that exploitation effectively happens is the big question.

Read DC's whole post and any discussion here, as he has made an excellent effort over the last several years at digesting some heavy math and dry reading from the book. He is also making sense of the Bakken oil production numbers in other posts.

As a PS, I have added an extra section in the book to describe the dispersive diffusion model describing the Bakken production numbers.


DC also posted this piece on the PeakOilBarrel blog. Almost 500 comments there.

CSALT re-analysis

I have previously described a basic thermodynamic model for global average temperature that I call CSALT. It consists of 5 primary factors that account for trend and natural variability. The acronym C S A L T spells out each of these factors.

C obviously stands for excess atmospheric CO2 and the trend is factored as logarithm(CO2).

S refers to the ENSO SOI index which is known to provide a significant fraction of temperature variability, without adding anything by way of a long-term trend. Both the SOI and closely correlated Atmospheric Angular Momentum (AAM) have the property that they revert to a long-term mean value.

A refers to aerosols and specifically the volcanic aerosols that cause sporadic cooling dips in the global temperature measure. Just a handful of volcanic eruptions of VEI scale of 5 and above are able to account for the major cooling noise spikes of the last century.

L refers to the length-of-day (LOD) variability that the earth experiences. It is not well known, but this anomaly closely correlates with temperature variability caused by geophysical processes. Because of conservation of energy, changes in kinetic rotational energy are balanced by changes in thermal energy, via the waxing and waning of long-term frictional processes. This is probably the weakest link in terms of fundamental understanding, with the nascent research findings mainly coming out of NASA JPL (and check this out too).

T refers to variations in Total Solar Irradiance (TSI) due mainly to sunspot fluctuations. This is smaller in comparison to the others and around the level predicted from basic insolation physics.

Taken together, these factors account for a correlation coefficient of well over 0.90 against a global temperature series such as GISTEMP.
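In structure, the fit is an ordinary multiple linear regression; here is a sketch, assuming aligned monthly series and ignoring the lags and filtering used in the actual CSALT fits.

    import numpy as np

    # Regress temperature against the five CSALT factors and report the CC
    def csalt_fit(temp, co2, soi, aerosol, lod, tsi):
        X = np.column_stack([np.ones_like(temp), np.log(co2), soi, aerosol, lod, tsi])
        coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
        model = X @ coef
        return coef, model, np.corrcoef(temp, model)[0, 1]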

Over time I have experimented with other factors, such as tidal periodicities and other oceanic dipoles such as the North Atlantic Oscillation, but these are minor in comparison to the main CSALT factors. They also add more degrees of freedom, so they can lead one astray in terms of spurious correlation. A factor such as the AMO contains parts of the temperature record itself, so it also needs to be treated with care.

My biggest surprise is that more scientists don’t do this kind of modeling. My guess is that it might be too basic and not sophisticated enough in its treatment of atmospheric and climate physics to attract the General Circulation Model (GCM) adherents.

So what follows is a step-by-step decomposition of the NASA GISS temperature record shown below, applying the symbolic regression tool Eureqa as an alternative to the CSALT multiple linear regression algorithm. This series has been filtered and a World War II correction has been applied.

The GISTEMP profile, with the 1940-1944 interval adjusted downward by 0.1.

 


Imaging via Particle Velocimetry of Sloshing

This is an interesting paper on capturing the volumetric effects of sloshing:

Simonini, A., Vetrano, M. R., Colinet, P., & Rambaud, P. (2014). Particle Image Velocimetry applied to water sloshing due to a harmonic external excitation. In Proc. of the 17th International Symposium on Applications of Laser Techniques to Fluid Mechanics.

The scale that they describe applies to containers filled with liquid subject to external forces. Compare this against the very large-scale equatorial Pacific dynamics, linked from NOAA here.

Absolute temperature, on which the anomaly is based.

 

Anomaly of temperature. The emerging hotspots are what lead to El Nino conditions.