New Study: Irish Potato Famine (1846) Caused by Very Warm Winter

A new algorithmic technique applied to official Irish temperature records uncovers astonishing evidence that Ireland’s notorious 1846 potato famine was linked to an exceptionally warm winter. The catastrophe, which caused one million deaths from starvation, triggered mass peasant emigration to America.

The findings appear in the ground-breaking paper ‘New Algorithm to Identify Coldest and Hottest Time Periods. Case Study: Coldest Winters Recorded at Armagh Observatory over 161 Years between 1844 and 2004.’

The paper’s author, Dr Butina, made the discovery while working to overcome a long-standing problem faced by government climate scientists, who place heavy reliance on a ‘global model’ of temperature that is scientifically inaccurate and unreliable. In a spectacular demonstration, Dr Butina’s new algorithm provides a more scientifically accurate way of revealing especially warm and cold winters.

Dr Butina’s paper appears in the International Journal of Chemical Modeling (Volume 7, Number 3). Butina is regarded as the only scientist known to be analysing the official archives of daily temperatures.

Reacting to the findings was a highly impressed John Butler MBE, Emeritus Professor at Armagh Observatory, who commented that the discovery that:

“the winter months of January and February for 1846 were exceptionally warm is of particular interest for our background knowledge of meteorological conditions leading up to and covering the outbreak of potato blight in 1846/47 which resulted in the Great Famine and led to mass emigration from Ireland in the second half of the 19th century.”

Dr Butina explains why the new method is better than the consensus approach of merely averaging recorded temperatures:

“This is important because every data point has its own physical meaning and has to be accounted for. Remember, the average of a body of temperature recordings is no longer a ‘temperature’ but a number without any physical meaning.”

The publisher, the International Journal of Chemical Modeling, is dedicated to topics dealing with the numerical analysis of data generated by calibrated instruments, work that requires detailed knowledge of statistics, data mining, machine learning and the development of various types of algorithms.

Below is a summary through which non-scientific readers can gain some understanding of the complexity involved in analysing archived daily temperature datasets.

The first, and most difficult, issue to address is the meaning of the term temperature, without which the paper cannot be understood. For all scientists in the physical sciences, except those in the so-called ‘climate sciences’, the term temperature has a single meaning: a temperature measured by a calibrated thermometer reflects the kinetic energy, the energy of motion, of the molecules in contact with the thermometer. Therefore, any thermometer-generated datapoint must be referenced to the location of the thermometer, the time of observation and the molecules surrounding it. For example, the title of the paper is:

New Algorithm to Identify Coldest and Hottest Time Periods. Case Study: Coldest Winters Recorded at Armagh Observatory Over 161 Years between 1844 and 2004

The title tells the reader that the thermometer is part of the Armagh Observatory (in a small town in Northern Ireland), where two daily temperatures have been recorded since 1844: the maximum daily temperature, usually labelled tmax, and the minimum daily temperature, labelled tmin. Since only the tmax temperatures were used in the analysis, one year of data consists of 365 tmax readings, and since the dataset covers 161 years of readings, one has to analyse 58,765 datapoints (365 × 161).
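For readers who want to experiment, here is a minimal sketch of how such an archive might be loaded and reshaped for analysis, assuming a simple CSV with year, day and tmax columns; the file name and layout are illustrative assumptions, not the paper’s actual code:

```python
import numpy as np

# Hypothetical CSV with header columns: year, day, tmax (one row per day).
# The real Armagh archive, downloadable from the observatory's site,
# may be laid out differently and need its own parsing.
data = np.genfromtxt("armagh_tmax_1844_2004.csv", delimiter=",", names=True)

n_years, n_days = 161, 365           # 1844-2004, leap days dropped
tmax = data["tmax"][: n_years * n_days].reshape(n_years, n_days)

print(tmax.shape)                    # (161, 365) -> 58,765 datapoints in total
```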

Every field of experimental science deals with some aspect of the physico-chemical properties of molecules, and each calibrated instrument is associated with a unique symbol which tells the experimentalist which instrument was used to generate a given set of datapoints.

There are three different types of calibrated thermometers, and each attaches a unique symbol to its datapoints:

  • °C is assigned to thermometers calibrated according to the Celsius scale
  • °F is assigned to thermometers calibrated according to the Fahrenheit scale
  • K is assigned to thermometers calibrated according to the Kelvin scale

This brings us to the issue of the so-called ‘annual global temperature’, a purely theoretical number that has no physical meaning, cannot be measured and as such cannot be evidence of anything. And yet it has been called a ‘temperature’ and assigned symbols that may only be used in association with a calibrated thermometer, like °C!

To finish with this introduction, let me give you some facts and let you decide what makes sense and what does not.

Figure 1. Infamous Hockey Stick downloaded from NASA/NOAA website, the official Global Warming Site

Key points about Figure 1:

  • All the datapoints are calculated and cannot be measured; this is therefore a computational model and NOT evidence of anything
  • The global annual temperatures between 1880 and 2015 vary by ±0.6 around the mean of 14.0 (remember, those are numbers without any physical meaning and therefore NOT temperatures!)
  • According to that model, all years up to 1936 were below the mean, while all years from 1977 to the present day are above the mean
  • The total range of the datapoints is 1.2

Real temperatures vs ‘Global Model’:

Figure 2. Maximum and Minimum Temperatures on our planet measured by calibrated thermometers

The records show that 56.7°C was the hottest temperature ever recorded, in Death Valley, USA, on July 10, 1913, while the coldest, -89.2°C, was recorded at Vostok Station in Antarctica on July 21, 1983. If one plots the model’s temperatures (Fig 1) against the real (measured) temperatures, with their total range of 145.9°C (Fig 2), it becomes obvious even to non-specialists that the model has absolutely nothing in common with the physical reality of our planet. To put it in numerical terms, the total range of the model is 1.2 against the 145.9°C observed, so the model spans just 0.82% (1.2/145.9 × 100) of the observed range, i.e. the model is just noise in the real data!

Now to the paper entitled

‘New Algorithm to Identify Coldest and Hottest Time Periods. Case Study: Coldest and Hottest Winters at Armagh Observatory over 161 Years between 1844 and 2004’

The obvious questions to ask are:

  1. Who wants to know about the coldest winters?
  2. Why do we need a new algorithm to do it?

The answer to the first question is that every local authority across the northern hemisphere is interested in unusually cold winters, since they are associated with deaths among the old and the very young and with disruptions to heating, transport and food supply.

The answer to the second question is really alarming: I am, at the moment, the only scientist publishing papers (since 2012) on patterns in daily temperature data, using archived thermometer readings from weather stations across the globe held at internationally recognised data centres.

There is a free-to-use search engine called Google Scholar where one can find published research papers on any subject or person. A search like ‘patterns in daily temperatures’ will give you a lot of hits, but if you follow each of those hits you will find that the paper always starts with daily temperatures at different locations and finishes with the annual mean. To remind you: the mean of a number of thermometer-recorded temperatures is no longer a ‘temperature’.

A notoriously difficult question, such as whether the winter of 1844 was colder or warmer than the winter of 2004, is impossible to answer in an unambiguous way:

Figure 3. Difference between the 1844 and 2004 winters: 1844 warmer (red) and 1844 colder (blue); winter is defined in this paper as the first 60 days of a year

The difficulty in declaring one winter colder/warmer than another is the ‘cross-over’ problem, where one winter is a few days warmer, then a few days colder, than the other, in a seemingly random way that we do not understand. To overcome this cross-over problem, the new algorithm was designed so that, rather than comparing two winters with each other, we compare each winter against a reference point I call wMIN, i.e. an ultimate winter (the winter minimum):

Table 2. Creating new reference points for comparison

The table above uses real data for only the first 6 days of the ‘winter’, as a simple example of the key step of the algorithm. Each day of the winter consists of 161 datapoints, one for each year between 1844 and 2004, and for each of those days the minimum and maximum temperatures were identified using the Excel MIN and MAX functions. The bottom two rows contain wMAX (winter MAX) and wMIN (winter MIN), which are built from the maximum and minimum readings on each particular day and therefore represent the observed maximum and minimum boundaries of the dataset:

Figure 6. Boundaries for 6-datapoint winters, wMAX (red) and wMIN (blue), with a random winter, w2001 (green), inside the two boundaries.
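In code, this step is simply a column-wise minimum and maximum over the winters matrix. A minimal numpy sketch, reusing the tmax matrix from the earlier snippet (the variable names are mine, not the paper’s):

```python
import numpy as np

# winters: one row per year (1844-2004), one column per winter day,
# where 'winter' is the first 60 days of the year, as defined in the paper
winters = tmax[:, :60]

w_max = winters.max(axis=0)   # wMAX: hottest tmax observed on each winter day
w_min = winters.min(axis=0)   # wMIN: coldest tmax observed on each winter day
```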

When we want to identify the coldest winters, each winter is compared to wMIN (the cold reference pattern), while the process is reversed when the hottest winters are being identified, with wMAX used as the hot reference pattern. This brings us to the key part of the algorithm, which is based on calculating the distance between each winter and one of the boundaries of the dataset, either wMIN or wMAX, rather than calculating the distance between two actual winters.

So, how is the distance between two multidimensional patterns calculated? There are around 20 different similarity indices in the specialised field of Chemoinformatics, and one is selected depending on the problem. In our case the preferred measure is one of the gold standards, the Euclidean Distance, referred to as ED, for convenience, throughout the rest of the paper:
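In standard notation, for two winter patterns a and b, each with n daily tmax readings:

$$ED(a, b) = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}$$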

To calculate the ED one squares the difference between each pair of datapoints, sums those squared differences and then takes the square root of the sum. The minimum value the ED can achieve is 0, when the two patterns are identical; there is no upper limit, and the larger the ED, the more different the two patterns are.

The winters recorded at Armagh Observatory between 1844 and 2004:

Figure 8. Coldest (blue, min) and hottest (red, max) daily records for tmax1 to tmax60
at Armagh (1844-2004). All individual winters are between the two extremes

Table 4. Ten coldest (left) and ten hottest winters (right) between 1844 and 2004

So far, our ranking algorithm has sorted the 161 winters from the coldest, 1963, to the hottest, 1846, by their Euclidean Distance to the ‘cold’ reference point, wMIN.
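A minimal sketch of this ranking step, continuing the numpy example above (an illustration only, not the paper’s own code):

```python
import numpy as np

# winters and w_min come from the boundary sketch above (161 x 60 matrix)
# Euclidean Distance of every winter to the cold reference pattern wMIN
ed_to_wmin = np.sqrt(((winters - w_min) ** 2).sum(axis=1))

# Small ED -> close to wMIN -> cold winter; large ED -> warm winter
years = np.arange(1844, 2005)
order = np.argsort(ed_to_wmin)
print("coldest:", years[order[0]], " hottest:", years[order[-1]])
```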

The next step is to identify winters which are either unusually/extremely cold or hot using a statistical parameter called z, or the z-score, which tells us how many standard deviations each datapoint (each ED) lies from the mean of the data’s distribution. To calculate the z-score we transform the ED using the following formula:
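In standard notation:

$$z = \frac{X - \mu}{\sigma}$$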

Here X is an ED from wMIN, µ is the mean of all the EDs and σ is the standard deviation of the EDs. In short, we have transformed the Euclidean Distances to wMIN into distances, expressed in standard deviations, from the mean of all the EDs.
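Continuing the earlier numpy sketch, this transformation and the ±2 cut-off used later in the paper take only a few lines (again an illustration, not the paper’s code):

```python
# ed_to_wmin and years come from the ranking sketch above
# z-score of each winter's ED to wMIN
z = (ed_to_wmin - ed_to_wmin.mean()) / ed_to_wmin.std()

# |z| >= 2 marks unusual/extreme winters at either end of the distribution
extremely_cold = years[z <= -2]   # very close to wMIN, i.e. unusually cold
extremely_hot = years[z >= 2]     # very far from wMIN, i.e. unusually hot
```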

Since a detailed discussion of z-scores and the normal distribution is outside the remit of this paper, let me just summarise the topic:

Figure 9. Normal distribution curve and z-scores.

Assuming a normal distribution, all the EDs that are 2 or more standard deviations from the mean in either direction are considered unusual or extreme. For a normal distribution, one would expect about 2.5% of the datapoints at either end of the curve. For more details on the topic, read any basic book on statistics.

Applying z-score Analysis to the Ranking Algorithm

Table 5. z-scores for the 10 coldest (left) and 10 hottest winters (right). z-scores at -2 or lower are highlighted in blue, while those at +2 or higher are in red

We can summarise Table 5 in the following way: out of 161 winters, six can be described as unusually or extremely cold (1963, 1895, 1979, 1881, 1879 and 1947), while only one, 1846, can be described as unusually or extremely hot. Two of those winters, 1963 and 1895, were over 3 standard deviations from the mean, making them especially unusual.

Table 6. Distribution of z-scores from the mean for the Armagh tmax winters data

In terms of the overall distribution, 95.7% (154/161) of the winters can be labelled normal, 3.7% (6/161) extremely cold and 0.6% (1/161) extremely hot. In other words, between 1844 and 2004 unusually cold winters outnumbered unusually hot winters 6:1!

The most important part of evaluating the quality of any algorithm is to check whether the results can be validated and applied to nearby geographical regions, i.e. do the temperature patterns observed at Armagh, Northern Ireland, reflect similar temperature patterns across the UK:

Figure 14. British Isles and Armagh Observatory in Northern Ireland (red)

The thing to notice is that the British Isles are a relatively small geographical unit surrounded by huge masses of water that act as a natural thermostat, which explains the rather mild climate observed over the British Isles.

So, what is known about the winters of 1895 and 1963, the two most unusually cold winters at Armagh?

Winter of 1894/1895

Below is a summary of quotes from different sources (archived newspaper reports and church records), all with similar conclusions:

The winter of 1894-95 was severe across the British Isles and one of the coldest since the Little Ice Age of the 1650s. The River Thames froze for the last time, and numerous skating festivals were organised on the Serpentine in London’s Hyde Park and on the Thames itself!

The coldest temperatures on record were reported as -13°C at Loughborough, -22°C at Rutland, -24°C at Buxton and -27°C at Braemar.

Winter of 1962/1963

The winter of 1962–1963 (also known as the Big Freeze of 1963) was one of the coldest winters on record in the United Kingdom. Temperatures plummeted, and lakes and rivers began to freeze over. On 29–30 December 1962 a blizzard swept across the South West of England and Wales. Snow drifted to over 20 feet (6.1 m) deep in places, driven on by gale force easterly winds, blocking roads and railways. January 1963 was the coldest month of the twentieth century, indeed the coldest since January 1814, with an average temperature of −2.1°C. Much of England and Wales was snow-covered throughout the month. The country started to freeze solid, with temperatures as low as −19.4°C at Achany in Sutherland on the 11th. Freezing fog was a hazard for most of the country.

In January 1963 the sea froze for 1 mile (1.6 km) out from shore at Herne Bay, Kent, 4 miles out to sea from Dunkirk, and BBC television news expressed a fear that the Strait of Dover would freeze across.

What about the hottest winter of 1846?

Here is a very interesting story about what happened after I sent a copy of my paper to the Armagh Observatory, as a courtesy for using their datasets. In my paper I discussed only the coldest winters, not the hottest one, the winter of 1846. A week after I sent the paper, I received an email from John Butler MBE, an Emeritus Professor there, attaching two of their papers about the famine that occurred in Ireland in 1846 due to the unusually hot winter, and I quote:

“Dear Dr Butina,

Your finding that the winter months of January and February for 1846 were exceptionally warm is of particular interest for our background knowledge of meteorological conditions leading up to and covering the outbreak of potato blight in 1846/47 which resulted in the Great Famine and led to mass emigration from Ireland in the second half of the 19th century. In this connection, you may be interested to know that our relative humidity series, published a few years ago in the Int. J. Climatology, shows a distinct peak in the average RH at that time.”

That two different methodologies, mine using daily temperatures and theirs using relative humidity, converge on the same conclusion highlights the importance of using different scientific approaches that are based on solid scientific methodologies.

Please note that all the data I use can be downloaded from the Armagh Observatory site, and all the numbers I quote can be reproduced by anyone trained in the art of numerical analysis of complex data by following the algorithm’s steps in the original paper.

For more details about the author, see http://www.l4patterns.com/uploads/2015-new-algorithm-db.pdf
