
History of Encounters with the Sky Dragon


The First Skirmish – A Blow Against Prudery

My first encounter with the Sky Dragon occurred in the French Alps, but the first blow in that encounter was not mine but my wife’s! It was at a NATO-sponsored meeting on coal combustion held in 1986 at Les Arcs. My wife and I, along with three colleagues from MIT and their very proper wives, were congregated at the swimming pool of the hotel where the meeting was being held. We were chatting about this and that when another colleague, from Australia, arrived to join us. Shortly thereafter, his girlfriend appeared “aux seins nus”; that is, bare-breasted in a topless bathing suit. She proceeded to dive into the pool and swim. We men pretended not to notice how well endowed she was as she swam backstroke before us, but the proper Bostonian wives were shocked. Chatting among themselves, they proceeded to roundly condemn the young Australian lady for her scandalous behavior. My wife and I listened to all the chatter. I sat quietly without saying a word, not daring to suggest that it didn’t bother me at all. My MIT colleagues did likewise, but my wife had heard enough. She proceeded to the ladies’ room and reappeared shortly, herself topless, and joined the young Australian lady in the pool, both swimming bare-breasted.

Two things happened that evening at dinner. First, my Australian colleague got up (you know how unpretentious those Australians are) and proposed a toast to my wife for her exceptionally well endowed swimming performance at the pool earlier that day. Second, one of my MIT colleagues, who had witnessed it all, was so impressed that he solicited my opinion on the subject of greenhouse warming of the atmosphere by human CO2 emission. He was on an NAS committee considering the question and had read a paper of mine presented at the Combustion Symposium at MIT. I had used the infrared emission from the 4.2 micron band of CO2 to measure methane explosion temperatures in a 12 ft. diameter sphere. He also apparently knew that I had once served as a meteorologist while on active duty with the U.S. Navy. Now, just being asked for an opinion by someone from MIT is a great honor.

I responded that although CO2 was an essential ingredient for the photosynthesis that supports almost all life on Earth, I doubted that such a minor constituent of the atmosphere could have a significant effect on the radiative balance between the Sun and the Earth. I also suggested that the overall role of the atmospheric “greenhouse effect” could be checked by comparing the Earth’s average surface temperature with that of the Moon, which receives essentially the same solar irradiance but has no atmosphere.

Scouting the Enemy

In 1989, at a Symposium at Chatham College in Pittsburgh (formerly the Pennsylvania College for Women, Rachel Carson’s alma mater), a paper was presented describing a model in which greenhouse-gas-induced temperature changes in the atmosphere were driving the Earth’s ocean circulation. I had to heckle the speaker with the obvious fact that he had it “back asswards”. Meteorologists know from the El Niño phenomenon, the moderate temperatures in Western Europe caused by the Gulf Stream, the development and motions of hurricanes and typhoons, and the periodic summer monsoons in Asia and elsewhere, that it is the other way around; namely, that it is the distribution of land and ocean and the ocean currents that drive the atmospheric circulation. Clearly the model being presented had the “tail wagging the dog”. In the same symposium, I had a brief discussion with a distinguished atmospheric scientist who, during his presentation, had repeated the standard mantra that the atmosphere of Venus was hot because of a “greenhouse effect” caused by its high CO2 content. When I asked him whether he had corrected for the adiabatic compression caused by its high surface pressure, he responded that that was only a small correction factor. I left the Symposium in disbelief: something was terribly wrong.

A short time later, I had a similar discussion with the then President of the Combustion Institute, who repeated that same mantra about the temperature of Venus. He informed me that he was on an NAS panel considering the global warming issue. When I asked him whether he had considered the effect of Venus’ closer distance to the Sun, and the effect of adiabatic compression in its very dense atmosphere, I got a rather blank stare. While he was a distinguished chemist, the conversation convinced me that I was better qualified than he was to be on that panel. After all, temperatures in regions below sea level such as Death Valley and the Dead Sea are higher than in surrounding areas at sea level because of adiabatic compression, and of course, those higher temperatures have absolutely nothing to do with the CO2 content of our atmosphere.
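The adiabatic-compression point can be made quantitative with the dry adiabatic lapse rate. A minimal sketch, assuming the textbook value of about 9.8 K per km of descent and approximate depths below sea level (roughly 86 m for Death Valley and 430 m for the Dead Sea shore); these numbers are illustrative assumptions, not figures taken from the text:

```python
# Warming of dry air descending adiabatically below sea level.
# Assumptions: dry adiabatic lapse rate ~9.8 K/km; depths are approximate.
DRY_ADIABATIC_LAPSE_K_PER_KM = 9.8

def adiabatic_warming(depth_below_sea_level_m: float) -> float:
    """Temperature rise (K) of dry air descending to the given depth."""
    return DRY_ADIABATIC_LAPSE_K_PER_KM * depth_below_sea_level_m / 1000.0

print(adiabatic_warming(86))   # Death Valley: roughly 0.8 K warmer than sea level
print(adiabatic_warming(430))  # Dead Sea shore: roughly 4 K warmer
```

The point of the sketch is only that descent warms the air regardless of the atmosphere’s composition; the CO2 concentration appears nowhere in the calculation.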

Attacking the Sky Dragon – Defeat

Shortly thereafter, a colleague from New Zealand who had worked in our laboratory during his sabbatical, contacted me to solicit my opinion on the subject. After much discussion between us, and after I “retired”, we decided to cooperate on a poster-session paper that was presented at the Twenty-Fifth International Symposium on Combustion in 1994 (1). The analysis showed that atmospheric water vapor played the dominant role in infrared absorption, and that any “greenhouse runaway” for the Earth’s temperature should therefore already have occurred long before the last century’s increase in atmospheric CO2. With the ocean’s water vapor flux increasing exponentially with temperature, the resultant increase in cloud cover albedo would naturally limit or “buffer” the system in a negative feedback.

The paper also challenged the two “Greenhouse Catechisms”. The first catechism argues that in the absence of the “greenhouse effect”, the Earth’s temperature would be too cold for human habitation (about –25 C). It is argued that it is the atmosphere that “keeps the heat in”. That sets us up for the argument that too much greenhouse effect from too much CO2 will make the Earth too hot for human habitation. This first catechism will be referred to in a later figure as the “Cold Earth Fallacy”, and it is based on the erroneous assumption that the Earth’s surface and all the other entities involved in its radiative losses to free space have unit emissivity. The second catechism has already been discussed: the contention that Venus’ high surface temperature is caused by the greenhouse effect of its CO2 atmosphere.

As fear-mongering hysteria about human caused global warming grew, and as the Kyoto Protocol was promulgated, I felt compelled to get our analysis published more widely. I wrote to Bert Bolin, then head of the IPCC, and submitted our paper to Nature and Science, but they refused to publish it. Who were we to challenge all those sophisticated computer models that were predicting catastrophic warming as a result of human CO2 emission?

After some correspondence with the editor of Nature, and when it became clear that they were not interested in publishing the results of our analysis, I felt compelled to candidly express my opinions on the entire question. Here, in a condensed form, is the content of my last letter to the editor of Nature in October, 1994.

(Begin quote) “I have just reviewed the two articles you referenced… The article by C… is an excellent survey of the complexities involved in the hydrogeological cycle… its emphasis on the necessity of obtaining more data… is certainly something with which I agree… Our analysis is certainly consistent with his survey, but our analysis also offers the simplest of models… the radiative equilibrium perspective. I plead guilty to simplicity… the largest mass and heat capacity in the hydrogeological cycle is in the oceanic component of that cycle, and if one applies Kirchhoff’s law to the system, the ocean is in radiative equilibrium with the solar irradiance. The details of the composition of the dry atmosphere are thus of little account in the overall balance since the law is valid for any composition. At its equilibrium temperature, water can accumulate in its deep ocean storage realm to provide a long term “memory” of that equilibrium condition.

“The atmosphere is not driven by the short-term ‘forcing function’ of absorption within the atmosphere’s relatively trivial mass, but rather by the long-term ‘forcing function’ of the memory of the accumulated radiative equilibrium that resides in the ocean. In the intermediate term, the atmosphere is driven by variations in ocean dynamics in accordance with the El Niño phenomenon (i.e. the Pacific Decadal Oscillation). In the longer term, it is driven by variations in solar irradiance associated with variations in the Earth’s orbital motion about the Sun in accordance with Milankovitch. (I clearly neglected to include the variations in the solar cycles and how they might influence cloudiness). The current ‘greenhouse models’ such as those referred to in the W… & R… article have it ‘backasswards’: they drive the oceans with the atmosphere, which is an absurd notion that is contradicted by everything we know about long range weather forecasting!

“When I first read your comment that ‘Model validation using existing observational data is a fairly standard procedure’, my initial reaction was: hurrah, at last someone has made an honest attempt to validate their model. But the euphoria lasted only as long as it took me to read the article in question by W… & R….

“There is nothing in that paper that deals with model verification!

“There is absolutely nothing in that article that compares the standard greenhouse ‘radiation forcing’ ‘scenarios’ or ‘projections’ with data. The article contains all the standard ‘politically correct’ projections that have appeared over and over again in the literature…

“The situation is far worse with the greenhouse modelers…

“The current spate of greenhouse models is motivated in part by the same desire for publication, by the perceived need to create new departments in universities that will deal with this critical problem of ‘global weather change’, and by the politics of the environmental movement, which encourages the projection of catastrophes…” (End quote)

My pleas to Nature clearly fell on deaf ears.

But the final defeat came when I was even rejected by my own Unitarian Universalist Association. They were on the way to adopting a resolution on global warming. I tried to present the skeptics’ position at their General Assembly in Long Beach several years ago, but was not allowed to do so even though I knew more about the subject than anyone else there. I was told that it was “settled science” and that what they wanted was action to curb greenhouse gas emissions. In a workshop at a more recent General Assembly in Salt Lake City, I rose from the audience to present the skeptics’ viewpoint. But as soon as it became clear that I opposed their position, someone jumped up immediately and grabbed the microphone away from me. No one in the audience defended my right to present my arguments. So much for the fourth principle of the Unitarian Universalist Association: “A free and responsible search for truth and meaning”.

Counterattack as Reinforcements Arrive

In 2001, my wife and I took a Nation magazine cruise along the west coast of Mexico. One of the featured speakers during the cruise was their columnist Alexander Cockburn, who is also the co-editor of CounterPunch. I sensed from some of his comments that he had serious reservations about the theory of human caused global warming. I spoke to him after one of his talks, indicating that I was a scientist who had been studying the question for several years. He indicated an interest in the results of my studies, so I sent him copies of my 1994 paper, my several letters to the editor, and other correspondence. After a hiatus of about six years, and out of a clear blue sky, he called me on the telephone to inform me that he was preparing to write a series of articles in the Nation magazine on the subject. I agreed to provide him with scientific advice. The articles appeared in four issues of the Nation from May 14 to June 25, 2007, with letters to the editor and his responses in the June 18 issue. The articles appeared under the intriguing titles “Is Global Warming a Sin?”, “Who Are the Merchants of Fear?”, “The Greenhousers Strike Back and Strike Out”, and “Dissidents Against Dogma”. After the “climategate” scandal broke, he wrote another article, entitled “From Nicaea to Copenhagen”, that appeared in the Jan. 4, 2010 issue. Letters to the editor and responses to that article appeared in the Feb. 8 issue.

Cockburn has received vituperative criticism from environmentalists as a result of that series of articles, and I myself was accused of being a tool of the coal barons. That would be a great surprise to them since I spent most of my career advocating for more stringent safety regulations in their mines.

Earth’s Radiative Equilibrium

I am exceedingly grateful to Cockburn for his series of articles about global warming and for the discussions we had on the scientific issues. He is one of the few journalists who have exercised due diligence in trying to understand the science. Most others in the ‘mainstream media’ simply regurgitate the anecdotal, fear-mongering claptrap they are fed by environmental lobbyists without digging any deeper into the totality of the data available or the fundamentals of the science involved.

That interaction with Cockburn encouraged me to revisit, amplify, and update the 1994 poster-session paper. The new paper was published in early 2009 in Energy and Environment (2). I am also grateful to Fred Goldberg, my friend and colleague from Sweden, who was kind enough to review that paper. Fred has been a longtime skeptic who has openly challenged the IPCC’s conclusions on human caused global warming and even publicly confronted Bert Bolin on the question. Both Cockburn and Goldberg spent a week with us at our residence last year in an impromptu salon discussing the science and politics of the ‘global warming/climate change’ issue.

Fred presented a spellbinding lecture on the climate history of Scandinavia to my Meteorology class at Colorado Mountain College, and even skied with us at Copper Mountain.

We now proceed to the analysis in that 2009 paper (2).

If one balances the solar input power absorbed by all the Earth’s entities involved in the radiative balance between the Earth and the Sun against the power those same entities lose as they radiate to free space, one obtains an equation for their average equilibrium temperature. It shows that the controlling factor is the ratio of their absorptivity to their emissivity. The absorptivity is controlled by the fraction of the Sun’s radiation that is reflected back to space, which is the Earth’s albedo, determined mostly by its cloud cover. A high albedo means a low absorptivity, and a low albedo means a high absorptivity.

Fig. 1. The calculated average temperature of all the Earth’s entities involved in the radiative equilibrium with the Sun and free space. The temperature, in degrees Celsius, is shown as a function of the entities’ average emissivity for various values of the Earth’s albedo.

Fig. 2. The calculated average temperature increase, in degrees Celsius, of all the Earth’s entities involved in the radiative equilibrium with the Sun and free space. The increases are calculated for an initial albedo of 0.3 and are shown as a function of albedo decrease, or equivalent absorptivity increase.

Taking the logarithms of the equation for the equilibrium temperature, and then its differentials, allows one to calculate the change in the average temperature of those entities associated with changes in their emissivity, absorptivity, or the albedo. The resulting sensitivity curve is plotted in Fig. 2 for the current average atmospheric temperature of 291 K and an average albedo of 0.30.
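The balance just described reduces to the familiar Stefan-Boltzmann relation T = [(1 − a)S / (4εσ)]^(1/4), and the sensitivity follows from its logarithmic differential. A minimal numerical sketch, assuming a solar constant of about 1368 W/m²; the particular emissivity and albedo values below are illustrative and not necessarily those used to draw Figs. 1 and 2:

```python
# Radiative-equilibrium temperature and its albedo sensitivity (cf. Figs. 1-2).
# Assumptions: solar constant S ~ 1368 W/m^2; emissivity/albedo values below
# are chosen for illustration.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1368.0       # solar constant, W m^-2 (assumed)

def equilibrium_temperature(albedo: float, emissivity: float) -> float:
    """Balance absorbed power (1-a)*S*pi*R^2 against emitted e*sigma*T^4*4*pi*R^2."""
    return ((1.0 - albedo) * S / (4.0 * emissivity * SIGMA)) ** 0.25

def temperature_change(T: float, albedo: float, d_albedo: float) -> float:
    """From d(ln T) = (1/4) d ln(1-a): dT = -(T/4) * da / (1-a)."""
    return -(T / 4.0) * d_albedo / (1.0 - albedo)

print(equilibrium_temperature(0.30, 1.0))   # ~255 K: the unit-emissivity "Cold Earth" value
print(equilibrium_temperature(0.30, 0.60))  # a lower emissivity raises the equilibrium temperature
print(temperature_change(291.0, 0.30, -0.0048))  # an absolute albedo drop of ~0.005 gives ~0.5 C
```

At 291 K and an albedo of 0.30, an absolute albedo decrease of about 0.005 (roughly 1.6 percent of 0.30) yields a warming of about 0.5 C, the same order as the albedo changes invoked later in the text.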

It is at this point that it must be acknowledged that there is considerable uncertainty in determining what “entities” on the Earth are involved in its radiative equilibrium with the Sun and free space. The solar input radiation is absorbed both heterogeneously and homogeneously: heterogeneously at the tops of clouds and at the Earth’s surface, and homogeneously by the gaseous components of the atmosphere. That same distribution of absorbers also emits the flux that is radiated from the Earth to free space.

Those entities are distributed vertically throughout the atmosphere: from the ocean surfaces at sea level, to mountains at high altitude, to continental depressions below sea level, to the tops of clouds in the upper reaches of the atmosphere, and to other particulates suspended in the atmosphere. Those same entities are distributed longitudinally and latitudinally from the equator to the poles. With what measured temperatures are the calculated ones to be compared? Is it reasonable to expect that the calculated temperatures should be compared only with the air temperatures measured near the Earth’s topographic surface? How representative is such an average surface air temperature of the temperature of the entire mass of the atmosphere involved in the radiative equilibrium process? If the near-surface air temperature is not representative, is it realistically possible to measure the average temperature of the entire mass of absorbing and emitting entities with sufficient accuracy to make a meaningful comparison between data and predictions? One is asking, in effect, for a definition of the mass of matter that constitutes the Earth’s surface, atmosphere, and oceans. How high in altitude must one go to include all of the atmosphere? Similarly, how deep below the ocean surface must one go to include the mass that influences the heat and mass transport processes near the surface and in the atmosphere above it? As difficult as these questions may be, they are the ones that need to be answered in order to evaluate the validity of any model purporting to predict future conditions. This is a formidable task; indeed, looking at the problem in depth, it may be more realistic to conclude that its resolution is unattainable given our limited understanding of the complex processes involved and the lack of data on the current thermodynamic state of those entities.

Nevertheless, despite those complexities, we will continue this analysis by making the not unreasonable assumption that any changes in the average temperature of those entities will be reflected in similar changes in the average atmospheric temperature near the Earth’s surface, as measured by the meteorological network of surface stations or from satellite observations. Those measured temperature changes as reported by the IPCC over the last century (3) are as follows:

         1910 – 1940, an increase of 0.5 C

         1940 – 1970, a decrease of 0.2 C

         1970 – 2000, an increase of 0.5 C

As can be seen from Fig. 2, those increases of 0.5 C for the two thirty-year spans from 1910 to 1940 and from 1970 to 2000 correspond to a relatively small decrease of only 1.5 percent in the Earth’s albedo. The observed decrease in temperature of 0.2 C from 1940 to 1970 corresponds to an albedo increase of only 0.5 percent.

Thus those modest changes in temperature are readily explained in terms of minor changes in albedo, brought about by small changes in cloudiness. Svensmark (4,5) has shown that the Earth’s cloud cover underwent a modulation in phase with the cosmic ray flux during the last solar cycle. His suggested mechanism for that correlation involves a decrease in cosmic ray flux during high solar activity, when the “solar wind” and magnetic activity shield the Earth from cosmic rays. The reduced incidence of cosmic rays results in the absence of adequate nucleating agents for cloud formation, a decrease in the Earth’s albedo, a corresponding increase in absorptivity, and hence a heating of the Earth. The opposite occurs during low solar activity, when the cosmic ray flux into the Earth’s atmosphere is high, nucleating agents are plentiful, and cloudiness increases the albedo. This results in a decrease in absorptivity, and hence a cooling of the Earth. The analysis summarized earlier from Fig. 2 supports the Svensmark mechanism as the cause of the 20th Century fluctuations in the average Earth temperature. As Fig. 2 shows, relatively modest changes of only a few percent in the Earth’s albedo are sufficient to account for the observed temperature changes of that Century. Those are precisely the magnitudes of the changes in cloudiness that Svensmark observed to vary in phase with the variations in solar activity.

Thus, except for the influence of cloud albedo, no assumptions are needed regarding the detailed composition of the atmosphere in order to explain the observed modest variations in 20th Century temperatures of the Earth’s atmosphere. This analysis supports the earlier conclusion (1) that it is implausible to expect that small changes in the concentration of any minor atmospheric constituent such as carbon dioxide can significantly influence the radiative equilibrium between the Sun, the Earth, and free space.

Puff, the Magic Sky Dragon is gone

At the present time, global warming skeptics/realists/deniers fall into two camps. The first camp believes that the greenhouse gas warming phenomenon is real but that the degree of warming from the recent increases in atmospheric CO2 concentrations is trivial. The second camp denies the very existence of the greenhouse effect, arguing that it is totally devoid of physical reality and that, as traditionally defined, it violates the laws of thermodynamics.

We here attempt to resolve the question by idealizing the radiative transport processes between the Earth’s surface, its atmosphere, and free space, in the absence of any solar input radiation.

As indicated earlier, the problem of obtaining accurate absorptivity-to-emissivity ratios for all the entities on the Earth that participate in the radiative balance is a formidable task. It is highly unlikely that any proposed model contains a realistic ratio for the entire globe over a long enough time scale. But even if those quantities were precisely known, the resultant temperature structure of the system of entities cannot be determined until all other energy transfer processes and forces are included in the model. Those other processes involve conduction, natural convection, forced convection (advection to meteorologists) in both the atmosphere and the oceans, endothermic evaporation from the oceans and land, exothermic condensation of water vapor in the atmosphere, and their accompanying mass transport processes, and finally, the intractable problem of turbulence. To those processes must be added the buoyancy force couple, the Coriolis force, and the tidal forces. Thus, even if the radiative processes were precisely known, all the other processes just cited would have to be included in order to predict the temperature structure of all the Earth’s entities. The complexity of the problem boggles the mind and has frustrated forecasting meteorologists for decades.

But, instead, let us consider reversing the process. What can be learned from using the known thermal structure of the Earth’s surface and its atmosphere, and then inferring the radiative transport processes that must accompany that structure? This analysis is taken from a paper entitled “The Nighttime Radiative Transport Between the Earth’s Surface, Its Atmosphere, and Free Space” that has recently been submitted for publication in Energy and Environment. The analysis reflects the radiative fluxes for nighttime conditions, but those fluxes are also present during daytime, when they must be subtracted from the input solar fluxes in order to obtain the net amount that heats the Earth.

The Earth’s surface, its atmosphere, and free space are approximated as concentric spherical surfaces whose radii are much larger than the distances between them and whose average temperatures, emissivities, and absorptivities are known. The Earth’s surface entities are taken to be at the surface’s average temperature, average emissivity, and average absorptivity. The gaseous atmosphere, without clouds to begin with, is approximated as a partially absorbing, partially transparent, non-reflective glass-like plate at a colder average temperature, with its own average absorptivity and emissivity.

The gaseous atmosphere is condensed into a thin glass plate whose average temperature is taken as the temperature of the “Standard Atmosphere” halfway up, at the 500 mb surface. When all is said and done, one obtains the following result. The net amount of infrared radiation absorbed by the colder atmosphere above from that emitted by the warmer surface below is 25 W/m2. The infrared radiation lost to free space from the atmosphere is 46 W/m2. The infrared radiation lost from the Earth’s surface to free space that is transmitted through the atmosphere is 228 W/m2.

Thus it is clear that the atmosphere helps to cool the Earth-atmosphere system, and that in the absence of clouds, it accounts for some 17% of the radiant energy flux that the system as a whole loses to free space.

The general correctness of this picture is clearly confirmed by direct meteorological soundings of the atmospheric lapse rate, which show that both the Earth’s surface and the atmosphere cool during nighttime hours, albeit at different rates because of their different emissivities.

It should be noted that nowhere in this balance is there a so-called “greenhouse effect” in which the atmosphere supplies any net radiant energy that is absorbed by the Earth. Under these assumptions for the thermal structure, the flow of radiant energy from both the Earth’s surface and its atmosphere is entirely outward toward free space.
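The all-outward pattern of these fluxes can be illustrated with a toy version of the “glass plate” idealization. This is only a hedged sketch: the temperatures (288 K for the surface, roughly 252 K for the 500 mb level of the Standard Atmosphere) and the emissivities (0.96 and 0.4) are assumed for the sake of example, and the resulting numbers are not the 25, 46, and 228 W/m2 values quoted above:

```python
# Toy "glass plate" model of nighttime radiative fluxes (illustrative values only).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def nighttime_fluxes(T_surface, T_atmos, e_surface, e_atmos):
    """Return (surface flux transmitted to space, atmosphere's flux to space,
    net surface-to-atmosphere exchange), all in W/m^2."""
    up_from_surface = e_surface * SIGMA * T_surface**4
    transmitted_to_space = (1.0 - e_atmos) * up_from_surface
    atmos_emission = e_atmos * SIGMA * T_atmos**4        # plate emits both up and down
    net_surface_to_atmos = e_atmos * up_from_surface - e_surface * atmos_emission
    return transmitted_to_space, atmos_emission, net_surface_to_atmos

to_space, atmos_to_space, net_up = nighttime_fluxes(288.0, 252.0, 0.96, 0.4)
print(to_space, atmos_to_space, net_up)
# With the surface warmer than the plate, all three net flows are outward (> 0);
# swapping the two temperatures (an inversion) makes net_up negative.
```

The qualitative conclusion is insensitive to the assumed numbers: whenever the surface is warmer than the plate above it, every net radiant flux points outward.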

In the presence of clouds covering on average some 33% of the Earth’s surface, the “glass plate” atmosphere becomes partially reflective. For that cloudy atmosphere, the radiation from the atmosphere to free space increases to about 106 W/m2, and the radiation lost from the surface to free space decreases to 153 W/m2. With clouds, the atmosphere now accounts for some 41% of the total radiant flux lost to free space. The physical effect of that radiant loss from clouds to free space is apparent from the fact that thunderstorm activity tends to maximize after sundown because of radiation from the tops of clouds. That radiation loss markedly cools the cloud tops, which steepens the temperature lapse rate, increases the instability of the cloudy atmosphere, and thus increases thunderstorm activity. As was the case for the cloudless atmosphere, the so-called “greenhouse effect” is nowhere to be found in the radiative balance for the cloudy atmosphere. All the radiant flux is outward toward free space.

There is only one exception in which one can find a net radiant flux from the atmosphere to the Earth’s surface, and that occurs during atmospheric inversion conditions. But even in the extreme case in which the surface temperature and the atmosphere’s temperature are reversed, the radiant power lost to free space from the atmosphere is a factor of five greater than the power radiated toward the surface from the warmer atmosphere. Inversion conditions are thus the only case in which the so-called “greenhouse effect” can possibly have any form of physical reality. But, of course, that is not how the greenhouse effect is traditionally defined by global warming modelers.

Such inversion conditions, however, are present over a small fraction of the Earth’s surface for limited periods of time, and since the recent increases in atmospheric CO2 concentrations have virtually no effect on the atmosphere’s total emissivity, the effect of those CO2 increases on the overall radiative flux balance is essentially nil.

The Legend of the Sky Dragon and Its Mythmakers

There is a simple way to tell the difference between propagandists and scientists. If scientists have a theory they search diligently for data that might actually contradict the theory so that they can fully test its validity or refine it. Propagandists, on the other hand, carefully select only the data that might agree with their theory and dutifully ignore any data that disagrees with it.

One of the best examples of the contrast between propagandists and scientists comes from the way the human caused global warming advocates handle the Vostok ice core data from Antarctica (6). The data span the last 420,000 years, and they show some four Glacial Coolings with average temperatures some 6 to 8 C below current values and five Interglacial Warming periods with temperatures some 2 to 4 C above current values. The last warming period in the data is the current one, which started some 15,000 to 20,000 years ago. The data show a remarkably good correlation between long term variations in temperature and atmospheric CO2 concentrations. Atmospheric CO2 concentrations are at a minimum at the end of Glacial Coolings, when temperatures are at a minimum. Atmospheric CO2 concentrations are at a maximum at the end of Interglacial Warmings, when temperatures are at a maximum. Gore, in his movie and his book “An Inconvenient Truth”, shows the Vostok data and uses them to argue that they prove that high atmospheric CO2 concentrations cause global warming.

Is that an objective evaluation of the Vostok data? Let’s look at what Gore failed to mention. First, the correlation between temperature and CO2 has been going on for about half a million years, long before any significant human production of CO2, which began only about 150 years ago. Thus, it is reasonable to argue that the current increase in CO2 during our current Interglacial Warming, which has been going on for the last 15,000 to 20,000 years, is merely the continuation of a natural process that has nothing whatever to do with human activity. Gore also fails to ask the most logical question: where did all that CO2 come from during those past warming periods, when the human production of CO2 was virtually nonexistent? The answer is apparent to knowledgeable scientists: from the same place the current increase is coming from, the oceans. The amount of CO2 dissolved in the oceans is some 50 times greater than the amount in the atmosphere. As oceans warm, for whatever reason, some of their dissolved CO2 is emitted into the atmosphere, just as your soda pop goes flat and loses its dissolved CO2 as it warms to room temperature even as you pour it into the warmer glass. As oceans cool, CO2 from the atmosphere dissolves back into them, just as soda pop is made by injecting CO2 into cold water.
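The “soda pop” mechanism can be put in rough quantitative terms with Henry’s law. The sketch below uses the van ’t Hoff temperature dependence with commonly tabulated constants for CO2 in water (a solubility of about 0.034 mol/(L·atm) at 298 K and a temperature coefficient of about 2400 K); both values are assumptions adopted here for illustration:

```python
# The "soda pop" mechanism: CO2 solubility in water falls as temperature rises.
# Assumed constants (commonly tabulated, used here for illustration only):
# kH ~ 0.034 mol/(L*atm) at 298.15 K; van 't Hoff temperature coefficient ~2400 K.
import math

def henry_constant_co2(T_kelvin: float) -> float:
    """Approximate Henry's-law solubility of CO2 in water, mol/(L*atm)."""
    return 0.034 * math.exp(2400.0 * (1.0 / T_kelvin - 1.0 / 298.15))

cold = henry_constant_co2(278.15)  # 5 C water
warm = henry_constant_co2(298.15)  # 25 C water
print(cold / warm)  # cold water holds roughly 1.8x as much CO2 at the same pressure
```

Under these assumed constants, water at 5 C holds nearly twice as much CO2 as water at 25 C at the same partial pressure, which is the direction of the warming-then-outgassing argument made above.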

But the real “clincher” that separates the scientists from the propagandists comes from the most significant fact that Gore fails to mention. The same Vostok data show that changes in temperature always precede the changes in atmospheric CO2 by about 500-1500 years.

The temperature increases or decreases come first, and it is only after 500-1500 years that the CO2 follows. Fig 3 shows the data from the termination of the last Glacial Cooling (Major Glaciation) that ended some 15,000 – 20,000 years ago through the current Interglacial Warming of today. The four instances where the temperature changes precede the CO2 curve are clearly shown. All the Vostok data going back some 420,000 years show exactly the same behavior. Any objective scientist looking at that data would conclude that it is the warming that is causing the CO2 increases, not the other way around as Gore claimed.
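The lead-lag relationship described above is the kind of thing one tests by shifting one series against the other and finding the offset that maximizes their correlation. The sketch below does exactly that on synthetic data (a sinusoidal “temperature” series and a copy of it delayed by one sample standing in for CO2); it is an illustration of the method, not an analysis of the actual Vostok record, and all the series and lags are invented for the example.

```python
# Illustrative lag detection on synthetic series (not the Vostok data):
# build a "temperature" series and a "CO2" series that trails it, then
# recover the delay by maximizing the lagged cross-correlation.
import math

n = 420        # samples, say one per 1000 years
true_lag = 1   # the CO2 series trails temperature by 1 sample

temp = [math.sin(2 * math.pi * k / 100) for k in range(n)]  # ~100-sample cycle
co2 = [temp[max(k - true_lag, 0)] for k in range(n)]        # delayed copy

def xcorr(a, b, lag):
    """Pearson correlation of a[k] with b[k + lag] over the overlap."""
    pairs = [(a[k], b[k + lag]) for k in range(n - lag)]
    ma = sum(x for x, _ in pairs) / len(pairs)
    mb = sum(y for _, y in pairs) / len(pairs)
    num = sum((x - ma) * (y - mb) for x, y in pairs)
    den = math.sqrt(sum((x - ma) ** 2 for x, _ in pairs) *
                    sum((y - mb) ** 2 for _, y in pairs))
    return num / den

# The lag at which CO2 best matches temperature is the recovered delay:
best_lag = max(range(0, 5), key=lambda L: xcorr(temp, co2, L))
print(best_lag)  # prints 1: temperature leads, CO2 follows one sample later
```

A positive best-fitting lag in this convention means the second series trails the first; applied to real ice-core records, the same idea requires careful handling of uneven sampling and gas-age versus ice-age dating, which this toy example ignores.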

It is even more revealing to see how the advocates of the human-caused global warming theory handle this “clincher” of the argument. It is generally agreed that the Vostok cycles of Glacial Coolings and Interglacial Warmings are driven by changes in the parameters of the Earth’s orbital motion about the Sun and its orientation with respect to that orbit; namely, changes in the ellipticity of its orbit, changes in its obliquity (tilt relative to its orbital plane), and the precession of its axis of rotation. These changes are referred to as the Milankovitch cycles, and even the human-caused global warming advocates agree that those cycles “trigger” the temperature variations. But the human-caused global warming advocates present the following ad hoc contrivance to justify their greenhouse effect theory. The Milankovitch cycles, they say, are “weak” forcings that start the process of Interglacial Warming, but once the oceans begin to release some of their CO2 after 500-1500 years, the “strong” forcing of “greenhouse warming” takes over to accelerate the warming. That argument is the best example of how propagandists carefully select data that agree with their theory while dutifully ignoring data that disagree with it. One need go no further than the next Glacial Cooling to expose that fraudulent argument for the artificial contrivance that it really is. Pray tell us then, we slayers of the Sky Dragon ask, what causes the next Glacial Cooling? How can it possibly begin when the CO2 concentration, their “strong” forcing, is at its maximum? How can the “weak” Milankovitch cooling effect possibly overcome that “strong” greenhouse heating when the CO2 concentration is still at its maximum value at the peak of the Interglacial Warming? The global warmers thus find themselves stuck way out on a limb with that contrived argument. They are stuck there in an everlasting Interglacial Warming, with no way to begin the next Glacial Cooling that the data show.

But one has to be sorry for Gore and his friends, for after all, they are in the global warming business. Global cooling is clearly someone else’s job!

In my 1994 paper, I concluded that the unverified models used by the IPCC did not realistically represent the forces that determine the temperature of the Earth and its atmosphere, and that it would be absurd to base public policy decisions on them. Regrettably, what was then merely “absurd” has today turned into something more sinister. Models have been developed that try to validate the existence of an intensifying ‘greenhouse effect’ driven by very modest changes in the concentration of the minor constituent CO2, even though its absorption of the Earth’s infrared radiation emitted to free space is already near saturation. Those models continue to be developed even though, as shown here and as shown much earlier, the ‘greenhouse effect’ has long been known to be devoid of physical reality (7, 8). When those models were criticized for their omission of clouds, the modelers included water vapor, but in the form of a positive feedback. That way the models could magnify the trivial effect of increasing CO2 concentrations, and thus be “tweaked” in the direction the modelers wanted them to go. In doing so, they ignored the overwhelming evidence that the water vapor feedback, in the form of clouds, is negative. Even after their models were shown to be faulty, they continued to use them to make predictions, which were then touted as the equivalent of actual data; public policy decisions were then made, and continue to be made, on the basis of those models.
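The amplification mechanism described above is ordinary feedback algebra: a small direct response gets multiplied by 1/(1 − f) when a feedback fraction f is assumed, so a positive f inflates the response while a negative f shrinks it. The sketch below is that generic textbook arithmetic only, not a representation of any particular IPCC model, and the numbers are purely illustrative.

```python
def amplified_response(direct_response, f):
    """Net response of a linear system with feedback fraction f (f < 1).
    Positive f amplifies the direct response; negative f damps it.
    Generic textbook feedback algebra; values here are illustrative."""
    if f >= 1:
        raise ValueError("f >= 1 gives a runaway (unbounded) response")
    return direct_response / (1.0 - f)

direct = 1.0  # some small direct warming, in arbitrary units
print(amplified_response(direct, 0.5))   # positive feedback doubles it: 2.0
print(amplified_response(direct, -0.5))  # negative feedback damps it: ~0.67
```

The asymmetry is the point of the paragraph above: the sign assumed for the cloud/water-vapor feedback decides whether the same small direct effect is magnified or suppressed.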

Overall, such disingenuous behavior, and the acceptance of such behavior by some Scientific Journals, Professional Societies, and Government Agencies, both national and international, essentially amounts to scientific malfeasance on a grand scale. The implementation of policies based on the acceptance of such malfeasance will continue to have damaging effects on both science and the public welfare.


1.    Hertzberg, M. and J. B. Stott 1994, “Greenhouse Warming of the Atmosphere: Constraints on Its Magnitude”, 25th International Symposium on Combustion, Abstracts of Work in Progress Poster Session Presentation, The Combustion Institute, p. 459. (The complete Poster Session publication is available on request from the author of this publication)

2.    Hertzberg, M. 2009, “Earth’s Radiative Equilibrium in the Solar Irradiance”, Energy and Environment, Vol. 20, No. 1, pp 83-93. Or at www.icecap.us/images/uploads/EE20-1_Hertzberg.pdf .

3.    Klyashtorin, L. B. and A. A. Lyubushin 2003, “On the Coherence Between Dynamics of World Fuel Consumption and Global Temperature Anomaly”, Energy and Environment, Vol. 14, No. 6, pp 773-782, Fig. 1. Also at the National Climate Data Center, Global Surface Temperature Anomalies, 2007, on the web at www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies.html

4.    Svensmark, H. 2000, “Cosmic Rays and the Earth’s Climate”, Space Science Reviews, Vol 93, pp 155-166

5.    Svensmark, H. and N. Calder 2007, “The Chilling Stars: A New Theory of Climate Change”, Icon Books Ltd., Cambridge, 249 pp.

6.    Petit, J. R. et al. 1999, “Climate and Atmospheric History of the Past 420,000 Years from the Vostok Ice Core, Antarctica”, Nature, Vol. 399, pp 429-436

7.    Wood, R. W. 1909, “Note on the Theory of the Greenhouse”, Philosophical Magazine, Vol. 17, pp 319-320

8.    Gerlich, G. and R. D. Tscheuschner 2009, “Falsification of the Atmospheric CO2 Greenhouse Effects Within the Frame of Physics”, International Journal of Modern Physics B, Vol. 23, No. 3, pp 275-364. Available at http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1164v4.pdf