I want to drum up interest in the math they describe, in the event that it reveals a simple and elegant formulation. That's what many of the solid-state models turn into: much more elegant than what is seen in climate science.

I'll give you a pat on the back if no one else will 🙂

If you're moving away from that scheme, then I'll concentrate on a Monte Carlo that gives us the current errors and uncertainties, and let the old scheme die a quiet death 🙂

That cuts the amount of data in the MC in half.

Now I have to try and remember everything I had planned 🙂

I do have some preliminary results based on 5 or 6 model runs. These uncertainties are based on the RSS of all the individual months that make up the calculation for the time interval in question. While this is arguably accurate, it doesn't tell us anything about the sensitivity of the individual components (ip, drac, anom), whether Solver is being driven into a different solution space, or what Solver's uncertainty is. Hence the need for an MC approach.
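The MC idea above can be sketched as follows. This is a minimal illustration, not the actual analysis: `fit_model` is a stand-in for the real Solver fit, and the series, noise level, and trial count are all assumptions for demonstration. Perturbing every month and refitting gives the spread and skew of the fitted result, which the RSS of individual months can't reveal.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_model(series):
    """Placeholder for the actual Solver fit; a toy statistic
    (the mean) is used here so the sketch is runnable."""
    return np.mean(series)

# Toy monthly series standing in for the real anomaly data.
months = rng.normal(loc=0.0, scale=1.0, size=600)
sigma = 0.1  # assumed per-month measurement uncertainty

# Monte Carlo: perturb every month, refit, collect the spread of results.
trials = [fit_model(months + rng.normal(0.0, sigma, months.size))
          for _ in range(1000)]

print(f"fit spread (1-sigma):     {np.std(trials):.4f}")
print(f"fit bias vs. unperturbed: {np.mean(trials) - fit_model(months):+.4f}")
```

The spread of `trials` is the MC uncertainty of the fit; comparing its mean to the unperturbed fit exposes any bias, and a histogram of `trials` would show whether the errors are symmetric.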

Relative Time Error

  1.732%   1880 - 1900   + Bias  √
  1.734%   1880 - 1920   + Bias  √
  0.216%   1885 - 1935   - Bias
  0.215%   1885 - 1930   - Bias
  0.097%   1900 - 1940
  0.459%   1st half      - Bias  √
  0.041%   2nd half      - Bias  ?

I include the SIGN of the bias because the uncertainties are not symmetric. For large uncertainties we can expect the model result to be skewed high or low, and my results bear that out. Any starting date of 1885 or later has essentially no bias when you compare it to the other uncertainties involved. I.e., yes, there's bias, but it's lost in the noise. I have a check next to 1st_half, but I would bet that if I ran it 100 times it would be close to 50-50.

http://contextearth.com/2017/10/02/identification-of-lunar-parameters-and-noise/

but I should just fix it properly according to your suggestion. It's surprising that this calibration is so important, but it is!

I could have put it all in one formula using nested if statements, but they become so difficult for others to read and understand that I try to avoid them as much as possible.

Here's a graph of the 'Rel Time' errors due to assuming an equal number of days in each month. As you can see, the error drops off quickly. By 1920 the errors are under 0.01%.
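The shape of that error curve can be reproduced with a short calculation. This is a sketch under one assumption: the equal-month length is taken as 365.25/12 days (the actual analysis may use a different constant). The elapsed time from January 1880 to a given year is computed both from the real calendar and from the equal-month approximation, and the relative difference shrinks as the span grows.

```python
import calendar

def actual_days(y0, m0, y1, m1):
    """Calendar days from the start of month (y0, m0) to the start of (y1, m1)."""
    total = 0
    y, m = y0, m0
    while (y, m) < (y1, m1):
        total += calendar.monthrange(y, m)[1]
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return total

MEAN_MONTH = 365.25 / 12  # equal-month assumption (one possible choice)

for year in (1881, 1900, 1920, 1960):
    actual = actual_days(1880, 1, year, 1)
    approx = MEAN_MONTH * 12 * (year - 1880)
    print(f"{year}: {100 * abs(approx - actual) / actual:.4f}%")
```

A fixed error of a day or less in the start-of-span accounting becomes a smaller and smaller fraction of the total elapsed time, which is why the curve falls below 0.01% by 1920.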

if (year is not divisible by 4) then (it is a common year)
else if (year is not divisible by 100) then (it is a leap year)
else if (year is not divisible by 400) then (it is a common year)
else (it is a leap year)
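As a direct transcription of that pseudocode (flat, not nested, for readability):

```python
def is_leap(year):
    # Gregorian leap-year rule, transcribed from the pseudocode above.
    if year % 4 != 0:
        return False   # common year
    elif year % 100 != 0:
        return True    # leap year
    elif year % 400 != 0:
        return False   # common year
    else:
        return True    # leap year

print([y for y in (1896, 1900, 1904, 2000) if is_leap(y)])  # → [1896, 1904, 2000]
```

Note that 1900 correctly comes out as a common year (divisible by 100 but not by 400), which matters for a record starting in 1880.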

Yet since the data spans just over ~100 years, it's not clear which exact value to use to approximate this variation in the length of a calendar year.
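The ambiguity can be made concrete by averaging the calendar-year length over different spans. The 1880-1990 span below is an assumed stand-in for the actual record; a full 400-year Gregorian cycle gives the textbook 365.2425, but a ~110-year window containing the skipped leap year 1900 lands on a slightly different value.

```python
def is_leap(year):
    # Standard Gregorian rule, condensed to one expression.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Mean calendar-year length over two spans (1880-1990 is an assumed record span).
for y0, y1 in [(1880, 1990), (1601, 2000)]:
    days = sum(366 if is_leap(y) else 365 for y in range(y0, y1 + 1))
    print(f"{y0}-{y1}: mean year = {days / (y1 - y0 + 1):.4f} days")
```

Neither 365.25 nor 365.2425 is exactly right for the record's own span, which is the ambiguity the text points at.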
