REAL DATA Proves Northern Hemisphere Cooling for Last 140 Years

Detailed new statistical analysis of original datasets, freely accessed at the global depository of daily temperature data, shows that the ACTUAL temperature record proves a cooling trend for the Northern Hemisphere over the last 140 years.

The very claim of climate alarmists that “we know that global warming is happening, the science is solid and cannot be challenged” is therefore demonstrably unfounded.

The study author, Dr Darko Butina (Chemistry), explains:

“Global warming scientists have chosen to destroy ALL the evidence of the real temperatures by averaging all that information into a single number which has no physical meaning, and then come to the conclusion that since 1960 our planet has been ‘burning with fever’. However, if one analyses the original data that the global annual temperature is based upon, as I did, a completely opposite result is obtained.”

Dr Butina writes:

“I am the first scientist who has challenged this false logic, not by plotting different versions of annual global temperatures, but by looking at the original measured data that are the source of their model.”

This author’s first paper, in 2012, proved there is NO ‘Hockey Stick’ in temperature data generated by a calibrated thermometer (see the paper labelled #1, which uses the Armagh dataset).

That ‘Armagh’ study has been viewed over 35,000 times online and has drawn no negative criticism from any established scientist working in this field.

Darko Butina is keen to emphasize that all his work is done on the original datasets that are in the public domain, explaining:

“I do not modify the data; I use standard statistical tools and some of my own proprietary software, all of which have been published and are well understood by the scientific community. The fact that anyone can download the same dataset and use the same statistical tools means that every aspect of my work can be independently verified.”

Full paper below:

First instrument-based evidence that the dominant temperature trend of the last 140 years in the Northern Hemisphere was one of cooling

Dr Darko Butina

Introduction

Earlier this year, my latest two papers were published in the International Journal of Chemical Modelling (www.novapublishers.com/catalog/product_info.php?products_id=63522&osCsid=b1ca7add2f5351a6b1c8637048e26ddb) and can be accessed free of charge on my website www.l4patterns.com, where they are labelled as papers 5 and 6. Between the two papers, 33 weather stations were analysed, covering the USA, Canada and Eurasia. In total, 1,500,000 maximum daily temperatures (tmax) were analysed, with temperatures as low as -50.0°C and as high as 54.0°C, a total range of 104.0°C.

The two papers deal with the same topic, which is the analysis of maximum daily temperatures across the Northern Hemisphere and the partitioning of each temperature reading into one of three classes: normal, extremely hot or extremely cold. The proposed classification protocol is based on the ideal bell-shaped distribution curve, where all datapoints within +/-2 standard deviations of the mean are labelled as normal, those below -2 standard deviations as extremely cold (e.cold) and those above +2 standard deviations as extremely hot (e.hot). Readers who have some knowledge of statistics and understand the physical meaning, or to be more precise the physico-chemical property of molecules called “temperature”, should go straight to my website www.l4patterns.com and read publications 5 and 6.

For those who find the papers too difficult to follow, I have produced this summary, which explains in more detail the two key topics that are fundamental to understanding their scope:

  1. The physico-chemical property of molecules called temperature
  2. The concept of the distribution curve and the z-score

The main reason for publishing two papers dealing with the same topic is the sheer size of the datasets, containing over 1,500,000 datapoints, set against the limited number of pages allowed per publication.

One more important point to emphasize before we start this summary is that those two papers represent the first known publications in the scientific literature that analyse daily temperature readings without any modification to the original archived data, for example, reducing the whole dataset to the mean of the original datapoints. The obvious benefit of analysing the original recordings is that anyone trained in the analysis of datapoints generated by a calibrated instrument can independently reproduce this work, which uses standard statistical tools and archived datasets that are available free of charge. All the links to the datasets used in those two publications can be found with the respective papers at my website www.l4patterns.com.

Let us start with the statistical part of the two research publications.

Use of the distribution curve in the design of the classification protocol used in the two papers

A notoriously difficult problem in the physical sciences is to assign abstract terms to a range of numbers.

For example, if one downloads one year of daily temperature readings and asks what the coldest and the hottest day of that year was, the answer can be found by simply sorting that column in Excel. However, if one asks which of those temperature readings were unusually cold or hot, things get more complicated, since the answer depends on some sort of classification system in which the person doing the analysis has to decide which temperature ranges to assign to which class.

So, let me introduce the bell-shaped distribution curve and the huge impact it has had on many different classification problems. The distribution curve is such an important statistical tool that a detailed treatment of it can be found in any basic book on statistics, and numerous lectures on the topic are available on university websites.

The main points of that graph (Figure 1) are that any dataset is expected to have the following (a quick numerical check of these percentages is sketched after the list):

  • about 68% of its datapoints within +/- 1 standard deviation of the mean
  • about 95% within +/- 2 standard deviations of the mean, and
  • the remaining 5% of datapoints equally distributed in the tails of the curve:
    • 2.5% to the left of the mean, and
    • 2.5% to the right of the mean
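As a quick sanity check of those rule-of-thumb percentages, here is a minimal sketch in Python (my own illustration, not part of the papers, which work entirely in Excel); it evaluates the standard normal cumulative distribution via the error function:

```python
# Quick numerical check of the rule-of-thumb percentages quoted above.
# Illustration only; the papers themselves use Excel, not Python.
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Cumulative probability of the standard normal distribution at z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

within_1_sd = norm_cdf(1) - norm_cdf(-1)   # ~0.683, i.e. "about 68%"
within_2_sd = norm_cdf(2) - norm_cdf(-2)   # ~0.954, i.e. "about 95%"
each_tail   = norm_cdf(-2)                 # ~0.023 per tail (~2.5% as a rule of thumb)

print(f"within +/-1 SD: {within_1_sd:.1%}")
print(f"within +/-2 SD: {within_2_sd:.1%}")
print(f"each tail beyond 2 SD: {each_tail:.1%}")
```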

As a rule of thumb in the physical and medical sciences, the datapoints that lie within +/- 2 standard deviations of the mean are labelled as “normal”, while those datapoints that fall outside the “normality cut-off points” of +/- 2 standard deviations from the mean are labelled as the tails of the distribution curve and considered “extreme”, but statistically significant.

The key step in generating the distribution curve is to transform any given dataset into the universal distance space in which each datapoint is expressed as a number of standard deviations from the mean. That distance from the mean, in standard deviations, is known as the z-score. Since the distance from the mean can be either to the left or to the right of the mean, a negative sign is assigned to all datapoints that are to the left of the mean, while those to the right have a positive sign, or no sign at all. The mean, by definition, has a z-score of 0. Please note that very often the terms below and above the mean are used instead of left and right. The acronym SD will also be used in this report for standard deviation.

So, a z-score of, say, -2.3 means that the datapoint is 2.3 standard deviations to the left of the mean, while a z-score of 1.8 means that it is 1.8 standard deviations to the right of the mean.

Since the concept of a normal distribution curve is the key part of the classification protocol that will be used in the papers, let us simplify Figure 1 and define the key reference points for the classification scheme:

Now we can summarise our classification protocol as:

Normal:    -2.0 ≤ z-score ≤ 2.0    (between cut-off points A and B)    (1)

e.cold:    z-score < -2.0          (to the left of cut-off point A)    (2)

e.hot:     z-score > 2.0           (to the right of cut-off point B)   (3)

So, the datapoint with a z-score = -3.4 will be classified as e.cold, the datapoint with z-score = 1.2 as normal, while the datapoint with z-score = 2.1 will be classified as e.hot.
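For readers who prefer code to inequalities, the protocol (1)-(3) can be written in a few lines of Python; this is only a sketch of the rules as stated above, not the author's own software:

```python
def classify(z_score: float) -> str:
    """Assign a z-score to one of the three classes defined in rules (1)-(3)."""
    if z_score < -2.0:
        return "e.cold"    # rule (2): to the left of cut-off point A
    if z_score > 2.0:
        return "e.hot"     # rule (3): to the right of cut-off point B
    return "normal"        # rule (1): between cut-off points A and B

# The worked examples from the paragraph above:
print(classify(-3.4))   # e.cold
print(classify(1.2))    # normal
print(classify(2.1))    # e.hot
```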

The procedure for transforming a dataset into the universal distance space of standard deviations is a very simple one and can be easily automated in Excel (a scripted equivalent is sketched after the list below):

  1. Download the dataset containing the dates and temperature readings into Excel
  2. Calculate the mean and the standard deviation for the whole set using the Excel functions ‘average’ (which, strictly speaking, is the mean) and ‘stdevp’, which calculates the population standard deviation
  3. Apply the transform formula to each datapoint: z-score = (X – Mean)/SD, where X is the original reading, while Mean and SD are the mean and the standard deviation of the dataset
  4. The whole process of transforming, say, 50,000 tmax temperatures to their z-scores takes only a few seconds of computational time
  5. The next step is to either use the function ‘countif’ or simply sort the whole Excel table on the z-scores and obtain the counts for each class
  6. The ratio between the e.cold and e.hot classes tells us which of the extreme temperatures were dominant for the dataset analysed
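The same six steps can also be scripted outside Excel. The sketch below uses Python with pandas and assumes a hypothetical CSV file with ‘date’ and ‘tmax’ columns (the file and column names are my own placeholders); it illustrates the procedure rather than reproducing the software used in the papers. Note that std(ddof=0) matches Excel's ‘stdevp’ (population standard deviation).

```python
# Scripted equivalent of the six Excel steps above (illustration only).
# The file name and column names ("date", "tmax") are hypothetical.
import pandas as pd

df = pd.read_csv("station_tmax.csv")            # step 1: dates and tmax readings

mean = df["tmax"].mean()                        # step 2: Excel 'average'
sd = df["tmax"].std(ddof=0)                     # step 2: Excel 'stdevp' (population SD)

df["z_score"] = (df["tmax"] - mean) / sd        # step 3: z-score = (X - Mean)/SD

# steps 5 and 6: count the three classes and compare the two extreme tails
e_cold = (df["z_score"] < -2.0).sum()
e_hot = (df["z_score"] > 2.0).sum()
normal = len(df) - e_cold - e_hot

print(f"normal: {normal}  e.cold: {e_cold}  e.hot: {e_hot}")
print(f"e.cold : e.hot ratio = {e_cold} : {e_hot}")
```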

Definition of the term temperature and a brief history of archived temperature datasets

The first calibrated thermometer was created by Daniel Gabriel Fahrenheit, a German-Dutch scientist, in 1724, the year that can be labelled as the birth of modern science. The huge importance of that invention cannot be emphasized enough. For the first time, there was an instrument that had a single function – to measure the kinetic energy, also known as the energy of motion, of the molecules that surround that thermometer. There are three symbols associated with calibrated thermometers, each with a different calibration protocol: °C, °F and K. So, when a statement is made that an observed air temperature is 14°C, it means that it was measured with a calibrated thermometer; in other words, the term temperature associated with the symbol °C means one thing and one thing only – that the datapoint has been generated by a calibrated thermometer and that this number is the evidence of the kinetic energy of the molecules that surrounded that thermometer at a given time and a given location.

The key point of this section on real temperatures is that, since a calibrated thermometer reflects the kinetic energy of the molecules surrounding and interacting with it, the dataset in question simply reflects temperature patterns that are unique to the geographical location of that thermometer. What is fundamental to understand is that our planet is a very complex network of local temperatures, and before we can make any statements about global temperature patterns, we must first understand the local temperature patterns.

Only if every single local pattern moves in the same direction, either heating up or cooling down, can one describe those trends as “global”.

Let us digress briefly and demonstrate the huge importance that the readings from a thermometer can have for the future of our planet.

At the bottom of the food chain on land are the plants, which feed using the process of photosynthesis, which can be described by a simplistic formula:

Water + CO2 + sunlight = Sugar + Oxygen

The most important part of that equation is that the water molecule must be in its liquid state for the process of photosynthesis to work. When the thermometer reaches 0°C (the melting point of water), it means that if more heat energy from the sun becomes available, i.e. warming, water will change its state from solid (ice) to liquid, the plants will start to grow and the whole feeding process starts up the food chain. When the temperature of the water reaches 100°C (the boiling point of water), the kinetic energy will start to change the water from the liquid to the gas phase, and if that were to happen to our planet, life as we know it would be gone. If the temperatures around the globe dropped below the freezing point of water and persisted for a long period of time, the result would be a frozen Earth, which would have catastrophic consequences for most species.

The take-home message is that the readings generated by a calibrated thermometer are not just some numbers that can be averaged; those numbers have a physical meaning that can make the difference between life and death. This also highlights the importance of understanding the underlying physical process that each calibrated instrument is detecting when analysing the datasets it generates.

So, let us now start with a very simple example that puts temperatures and statistics together, then look at the temperature patterns of the historical readings for a single weather station, followed by a summary of the temperature patterns of the Northern Hemisphere for the last 140 years.

Example 1: Compare year 1970 (randomly selected) for Barrow and Death Valley

One year of data was chosen for simplicity, since 365 datapoints can be easily graphed and visualised. Barrow is a small town inside the Arctic Circle, while Death Valley is the hottest desert in the USA, and this small exercise will demonstrate the power of applying the distribution-curve principle to this new classification protocol.

Please note that each day of the year is numbered from t1-1 (temperature read on Jan 1) to t12-31 (temperature read on Dec 31).

As we can see, the two temperature patterns are represented by two parallel lines, which makes it very difficult to make any sensible comparison. However, if we transform the thermometer readings into their corresponding z-scores, a completely different picture emerges, one which enables us to quantify the similarity between the two patterns:

Now that we have done the analysis, let us look at the table and see what conclusions can be drawn:

The most important observation is that 97.8% and 99.7% of the temperature readings at Barrow and Death Valley, respectively, are classified as normal, or as nothing unusual! Also, the extreme cold days outnumber the extreme hot days by 5 to 3 at Barrow and by 1 to 0 at Death Valley.

Can you see anything alarming there? I can’t.

Example 2: Analysis of daily temperatures observed at Willow City, ND 1892 – 2016

Willow City is a small town in North Dakota (USA), near the Canadian border, and can best be described as a good representative of the rural temperature patterns observed in the continental part of North America. This dataset has 43,765 maximum daily temperature observations collected over 125 years, and by following the steps described earlier, the following distribution of daily temperatures across the three classes was obtained:

Before we continue, let me remind the reader that the work described in those two papers was NOT about proving or disproving the theory of Global Warming, which is only a computational model based on a purely theoretical number called ‘global temperature’ that does not exist, but about designing a classification protocol that allows us to compare very different local temperature patterns and simply count the numbers in the three classes.

It is also important to emphasise that the bell-shaped distribution curve is an ideal statistical system that is rarely observed. In ‘real’ datasets, the distribution curve is usually skewed either to the right (hot tail) or to the left (cold tail) of the mean:

Figure 5. Real life data are usually skewed either to the left (B) or right (C)

If we look back to Table 2, two things are clear:

  1. 97.45% of all temperature readings are considered normal, and
  2. The distribution curve is heavily skewed to the left, since the ratio between the e.cold and e.hot classes is 92 to 1 (1,102/12)

Going back to the temperature patterns observed at Willow City, note the following distribution of the extreme hot (red) and extreme cold (blue) temperatures between 1892 and 2016:

As can be seen, extreme cold temperatures (blue) have been observed continuously in each decade since 1892, while extreme hot temperatures (red) have been recorded on only two occasions since 1960. This is in total contrast with Global Warming theory, which claims:

  1. That every single year since 1960 was alarmingly hot, and
  2. That since the mid-1800s all years have been classified as ‘normal’, and that no extreme cold events exist in the nomenclature of global warming theory

So, if we look back at Table 2, which shows that 97.45% of the tmax temperatures in the Willow City archive since 1892 have been classified as normal, and also that 1,102 of the 1,124 extreme days, or 98%, were classified as extreme cold, I would again ask the same question as in Example 1: do we really need to worry about the future of our planet in terms of this mystical catastrophic overheating? I think NOT.

Summary for Willow City in terms of actual temperatures

So far, the analysis of the Willow City temperature data has concentrated on the z-scores, which in turn are at the core of the classification protocol, but since there is a simple mathematical relationship between the measured temperatures and their corresponding z-scores, it is extremely simple to produce a table that summarises the key temperature cut-off points and ranges:

Let me demonstrate how a combination of simple mental arithmetic and the use of a calculator allows a quick assessment of the key temperature patterns for any given weather station (a short scripted version of the same arithmetic follows the list).

  1. To find the total ‘normality’ range in terms of degrees C, one needs to add and subtract 2 standard deviations from the mean. Since one standard deviation for Willow City is ‘worth’ 15°C (Table 3), two standard deviations are equivalent to 30°C. Therefore, the mean of 10°C plus 2 SD of 30°C gives us the upper normality boundary at 40°C. At the other end, the mean of 10°C less 2 SD of 30°C gives us the lower (cold) normality boundary at 10 – 30 = -20°C. This simple calculation tells us that for the citizens of Willow City over the last 125 years, it was quite normal, or nothing unusual, to live at temperatures between -20°C and +40°C (a 60°C range)!
  2. The maximum observed temperature at Willow City was 43.3°C, and since the upper normality cut-off point was 40.2°C, it follows that the extreme hot range extends only 3.1°C above the upper normality cut-off.
  3. At the cold tail end, the minimum observed temperature of -35°C is 15°C below the ‘cold’ normality cut-off point of -20°C (-35 – (-20))
  4. So, in the case of Willow City we can now make the following statements:
    1. 97.45% of temperature readings can be labelled as normal
    2. The extreme cold class dominates the corresponding hot class by roughly 92 to 1 (1,102 datapoints against 12), and
    3. The cold tail extends 15°C beyond the lower normality boundary, while the hot tail extends only 3.1°C beyond the upper normality boundary
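Here is the same arithmetic written out as a short Python script, using the rounded Willow City values quoted above (the variable names are mine; the exact upper cut-off given in the text, 40.2°C, comes from the unrounded mean and SD):

```python
# The Willow City "mental arithmetic" as code, using the rounded values quoted
# in the text (mean ~10 °C, SD ~15 °C). The exact upper cut-off in the text is
# 40.2 °C because the unrounded mean and SD differ slightly from these values.
mean_tmax = 10.0     # °C, mean of the Willow City tmax record (rounded)
sd_tmax = 15.0       # °C, one standard deviation (rounded)

upper_normal = mean_tmax + 2 * sd_tmax     # +40 °C, cut-off point B
lower_normal = mean_tmax - 2 * sd_tmax     # -20 °C, cut-off point A

t_max_observed = 43.3                      # °C, hottest reading in the archive
t_min_observed = -35.0                     # °C, coldest reading in the archive

hot_tail = t_max_observed - upper_normal   # ~3 °C beyond the upper boundary
cold_tail = lower_normal - t_min_observed  # 15 °C beyond the lower boundary

print(f"normality range: {lower_normal:+.0f} to {upper_normal:+.0f} °C")
print(f"hot tail extends {hot_tail:.1f} °C, cold tail extends {cold_tail:.1f} °C")
```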

Overall summary for the whole of the Northern Hemisphere

What is left to decide is whether the temperature patterns observed in the small rural town of Willow City, in the continental part of North America, are representative of the rest of the Northern Hemisphere.


The two key numbers to note are that 98% of the more than 1.5 million temperature readings are classified as normal, and that at all weather stations, without a single exception, it is the extreme cold tail that has been dominant on our planet from the late 1700s to the present day.

The most important point of Figure 8 is that the maximum z-scores (red), which correspond to the maximum observed temperatures at individual weather stations, cannot be distinguished from the upper normality boundary (B), while the minimum z-scores (blue), corresponding to the minimum observed temperatures, are clearly separated from the lower normality boundary. In temperature terms, the average difference between the minimum observed temperatures and the lower normality boundary for the e.cold class is 15.6°C, while for the e.hot class it is only 3.2°C.

One way to interpret the graph above is that it is impossible to separate the ‘normality boundary’ from the maximum observed temperatures, i.e. there has been no overheating in the USA over the last 140 years, but there is definitely a strong cooling signal observed over the same period. Almost identical patterns have been observed for the rest of the Northern Hemisphere.

By far the most important aspect of these two papers, and of all the previous ones I have published since 2012, is that ALL the datasets are obtained from the world depository of close to 30,000 weather stations and can be accessed free of charge via the links and references in the relevant papers on my website www.l4patterns.com.

So, all you have to do is read the two papers and use your own logic and reasoning to reach your own conclusion as to whether there is any reason for alarm; and if you don’t believe the numbers I have presented, go ahead and download the original data, follow the classification protocol and prove me wrong!

And if you do accept those numbers, then ask yourself the following: if those numbers are correct, and therefore there is no alarming warming going on, where does this CO2 ‘blanket’ come into the equation? The fundamental problem with the theory of global warming is that it tries to explain the causes of our planet overheating, since ‘we all know that our planet has been warming over the last hundred years or so’, but as you have seen in these two papers, those phantom warming trends cannot be found in the instrumental data. In other words, it is nonsensical to look for the causes of overheating when there is none!

Also remember that if you ask any experimental carbon chemist or plant biologist, the specialists who know everything there is to know about CO2, whether the CO2 molecule can have any warming effect, they will tell you that this is not possible.



Comments (4)

  • Dr Pete Sudbury

    This paper is a travesty of the scientific method and should be taken down.
    All the author has done is possibly to demonstrate that temperature values are not normally distributed about the mean, viz:
    He says 98% of temperature readings are classified as “normal”, but in a normal distribution only 95% of readings fall within 2 standard deviations of the mean. So either his applied standard deviation is incorrect, or the data points are not normally distributed.
    Of the remaining 2%, there is a very high preponderance of readings more than 2 SD below the mean compared to those more than 2 SD above it. That again disproves the assumption of a normal distribution, means that the analysis needs to use non-parametric statistics, and means that any conclusions drawn from parametric statistics (which assume a normal distribution) are invalid.
    When the author says “…two papers represent the first known publications in the scientific literature that analyse daily temperature readings without any modification to the original archived data, for example, reducing the whole dataset to the mean of the original datapoints”, and then, in the second point of his explanation of the process, begins with “calculate the mean and standard deviation for the whole set”, followed by a number of increasingly abstruse transformations (see appended list), I was left wondering what on earth was meant by “without any modification”.
    If I understand correctly, the author may possibly have demonstrated that it is impossible to detect a warming signal of about 1 degree Celsius in datasets with a range of over 100°C, analysed using the wrong type of statistics. As I remarked earlier, he has very probably demonstrated that the distribution curve is skewed towards colder temperatures, and, possibly, that the distribution curve has remained that way throughout the last 100 years, which might be a worthwhile scientific finding, though one probably more easily derived by using a scatter diagram of temperature points; after putting the original data through the mangle, I wouldn’t be surprised if it turned out the opposite is the case.

    Appendix “using daily temperature readings without any modification to the archived data”…
    Calculate the mean and the standard deviation for the whole set using functions in the excel ‘average’ (which strictly speaking is the mean) and ‘stdevp’ which calculates standard deviation
    Apply the transform formula to each datapoint: z-score = (X – Mean)/SD where X is the original reading, while the Mean and SD are the mean and the standard deviation for the dataset
    The whole process of transforming, say, 50,000 tmax temperatures to their z-scores takes only few seconds of computational time
    The next step is to either use the macro called ‘countif’ or simply sort the whole excel table on the z-scores and get the counts of each class
    The ratio between the e-cold and e-hot class tell us which of the extreme temperatures were dominant for the dataset analysed


  • Robert Grutza

    Humans will never control Earth’s climate. Earth doesn’t care that we are here. Don’t spend so much time worrying about inane trivialities. There is a good chance some natural event will wipe out humanity, and the Earth will still be here.


  • Macha

    Earth may have even produced humans with the purpose of rebalancing some past or future existence or event… it may have needed more CO2, or less fossil fuel, or a species to alter the number of other species… we, humans, are simply ignorant of that in our hubris. Reminiscent of The Hitchhiker’s Guide to the Galaxy.


  • ron cirotto, P.Eng., BASc, Chemical Engineering

    Yes, this paper is a travesty! The travesty is the misuse of mathematics to describe an intensive thermodynamic variable, TEMPERATURE! Temperature is not a measure of how much of something, but a simple description of the level of vibration of molecules at that point in an infinite field of temperatures.
    Temperature is not a single value in any physical system except a system that is in thermodynamic equilibrium. If you do not understand this concept then stop reading now and please take an introductory Physics 101 course. It is that simple and, sadly, most people and quite a few scientists do not understand this simple concept. So much for our Western education system being the best. First of all, who is Dr Darko Butina? Who is Dr Pete Sudbury? What is their educational background? The raw data itself should have been shown in its natural form. A plus or minus variation of approximately 0.8°C over 100 years is not exactly a thermodynamic crisis!

