Written by Joseph E Postma
The last post explained the difference between energy and energy flux. Energy is generally a simple static scalar quantity, while flux refers to the instantaneous rate at which energy is expended or transferred – in this context, per unit of surface area, i.e. in Joules per second per square meter.
Physics, i.e. the real world and real-world reactions, occur in real-time. Reality doesn’t wait around for an average of something to build up and then decide to act – reality acts as time flows by, each infinitesimal moment to the next. Reality reacts to instantaneous flux, not the average flux because there is no “average” that reality waits around for to react to.
The standard procedure for “conserving energy” and then creating an energy budget and subsequent greenhouse effect is by numerically equating the terrestrial flux output with the solar flux input. This numerical procedure is done with the justification that “on average, the input and output must equal if the system is in equilibrium”. But this is done numerically on paper, not physically in reality, because the physics of reality reacts instantaneously to forces, and doesn’t wait around for averages.
So what’s the basic thing that we’re actually trying to conserve in regards to solar input and terrestrial output? The real physical quantity we want to conserve is energy, not flux. Energy is a fundamental unit of physics, while flux always depends upon the particular, real-time, local situation. So if we assume that, on average, the input and output energies are equal, which they should be, then we can consider such energies for any particular second. Considering any particular second is convenient since this allows us to directly convert the energy into flux later on.
In any given second, the Earth absorbs 1.22 × 10¹⁷ Joules of light energy from the Sun. This is calculated with the Stefan-Boltzmann equation for the Sun, factored for the distance to the Earth, the Earth’s cross-sectional area, and its albedo.
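That per-second energy figure can be sketched numerically. This is a minimal check, assuming standard reference values (solar constant ~1361 W/m², Earth radius 6.371 × 10⁶ m, Bond albedo ~0.306 – these figures are not from the article itself):

```python
import math

S0 = 1361.0          # solar constant at Earth's distance, W/m^2 (assumed value)
R_earth = 6.371e6    # mean Earth radius, m (assumed value)
albedo = 0.306       # Bond albedo, fraction of sunlight reflected (assumed value)

# Sunlight is intercepted over the Earth's cross-sectional disk:
cross_section = math.pi * R_earth**2                      # m^2
absorbed_per_second = S0 * (1 - albedo) * cross_section   # Joules per second

print(f"{absorbed_per_second:.2e} J/s")  # ~1.2e17 J/s, close to the article's figure
```

Small differences from the article’s 1.22 × 10¹⁷ J come from the exact constants chosen.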
In any given second, this energy, 1.22 × 10¹⁷ Joules, falls on one side of the planet – the day-side hemisphere. So, now that we know the total energy falling on the Earth in one second, and where it falls, we can convert the energy value into the units of the Stefan-Boltzmann equation, which are Joules per second per square meter. If we take the total energy and divide it by the surface area of a hemisphere of the Earth, we get a (linear) average of 480 Joules per second per square meter, or 480 W/m². Using the Stefan-Boltzmann equation, which equates flux to temperature, this corresponds to a temperature of +30 degrees C, which is very nice and warm and will melt ice into water on the day-side, etc. It is a reasonable number.
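The hemisphere arithmetic above can be checked directly. A short sketch using the article’s per-second energy figure, a standard Earth radius, and the Stefan-Boltzmann constant (the radius and constant are standard reference values, not from the article):

```python
import math

absorbed = 1.22e17       # J per second absorbed (figure from the article)
R_earth = 6.371e6        # mean Earth radius, m (assumed value)
sigma = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4

hemisphere = 2 * math.pi * R_earth**2   # area of the lit hemisphere, m^2
flux = absorbed / hemisphere            # W/m^2, linear hemispheric average
T = (flux / sigma) ** 0.25              # Stefan-Boltzmann temperature, K

print(f"{flux:.0f} W/m^2 -> {T - 273.15:.1f} C")  # ~478 W/m^2, ~30 C
```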
However, we must again recall that reality reacts to forces instantaneously, and not to averages of those forces after-the-fact. The light energy falling on the day-side hemisphere in one second is not evenly (linearly) distributed, because the Earth is round, not flat. That means that the true, real-time value of the flux density depends on locality: when the Sun is overhead it is strongest, when it is near sunrise or sunset it is weakest, and in between it ranges smoothly. When the Sun is directly overhead, and even nearly so, the flux density of the falling energy is not just strong enough to melt ice into water; it is also strong enough to evaporate water into vapour. This, basically, is what creates everything we recognize as the climate: water vapour rising into the atmosphere from the strength of the Sun, in real-time. The greenhouse effect models do not show this, and they actually even contradict it, because they incorrectly average the power of the Sun to where it doesn’t physically exist, and thereby make the solar power far too cold (on paper) to be able to create that water cycle and climate.
Back to Equating Flux
With an energy input of 1.22 × 10¹⁷ Joules over a hemisphere in one second from the Sun, and an energy output from the Earth of 1.22 × 10¹⁷ Joules from the entire globe, i.e. both hemispheres, it is not physically correct to equate these values in terms of flux. These values are true and totally correct in terms of energy. They cannot be made equal in terms of flux.
For example, if we say that the Earth is in numerical flux equilibrium with the Sun, and mistake this for conserving energy, then that would mean that the Earth must emit the same flux of energy as it receives from the Sun. Therefore the Earth must emit 480 W/m² on average, since that is what it receives from the Sun on an instantaneous basis.
Well, the Earth does not emit this flux of energy. That value is far too high. If you converted it into total energy emitted per second over the entire globe, it would be more energy than actually comes in. The known and measured value for the flux output from the Earth is 240 W/m².
Alternatively, we might say that if the Earth is in numerical flux equilibrium with the Sun, then the Sun must deposit the same flux of energy as the Earth emits. Therefore the Sun must deposit 240 W/m². However, this is only -18 degrees C if you check what the temperature of this flux is with the Stefan-Boltzmann equation, and that is not nearly high enough to melt ice into water, let alone evaporate water into vapour. Obviously, the problem here is that the Sun does not deposit its energy over the same surface area that the Earth emits energy from.
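Inverting the Stefan-Boltzmann law for the measured 240 W/m² output reproduces the -18 degrees C figure quoted here:

```python
sigma = 5.670e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
flux_out = 240.0                  # measured terrestrial output flux, W/m^2

# Invert F = sigma * T^4 to get the equivalent blackbody temperature:
T = (flux_out / sigma) ** 0.25    # Kelvin

print(f"{T - 273.15:.1f} C")      # ~ -18 C
```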
Equating Energy vs. Equating Flux
If we equate the energy coming in from the Sun to that leaving from the Earth, in the proper units of energy which is Joules, then 1.22 × 10¹⁷ Joules comes in and 1.22 × 10¹⁷ Joules goes out. There is no problem here.
If you want to know what this energy is in terms of flux, then the solar flux deposits 480 W/m² (and this is still a “poor” average because it is linear and ignores real-time location dependence), and the Earth emits 240 W/m². The energy from the Sun and the Earth cannot be numerically equated in terms of flux, but only in terms of energy.
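The factor of two between these two flux figures is pure geometry: the same per-second energy spread over a hemisphere versus the whole sphere. A minimal sketch (the Earth radius is a standard reference value, not from the article):

```python
import math

E = 1.22e17              # J per second, both in and out (figure from the article)
R = 6.371e6              # mean Earth radius, m (assumed value)

flux_in = E / (2 * math.pi * R**2)   # input spread over the day-side hemisphere
flux_out = E / (4 * math.pi * R**2)  # output spread over the entire sphere

# The energy balances exactly; the fluxes differ by the area ratio of 2.
print(f"in: {flux_in:.0f} W/m^2, out: {flux_out:.0f} W/m^2, ratio: {flux_in / flux_out:.1f}")
```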
I mean, you can equate the fluxes on paper if you want, but this no longer corresponds to the real system or its physics; the procedure is physically meaningless. Reality and physics do not respond to averages. Reality does not wait around for an average to be determined and then choose to react to it; reality and physics react in real-time to instantaneous inputs, and the Sun doesn’t shine evenly on both hemispheres at once.
With instantaneous inputs, the Sun creates the water cycle and drives the entire climate. It heats the surface to very high temperatures – such as you can feel scalding your feet on a nice beach on a sunny day!
With the faulty flux-averaged inputs, sunlight is imagined to be freezing cold at only -18 degrees C. This isn’t strong enough to do anything – melt ice, sustain life, heat the beach, etc.
Mistakes can be made anywhere, and they can be very difficult to detect.
When starting with the problem of conserving energy input and output to the Earth, you can do it in terms of units of energy, in Joules, or you can try to do it in terms of energy flux density, in Joules per second per square meter. The problem is that averaging only works in terms of plain energy, in Joules, but not in energy flux density. If you say that, every second, 1.22 × 10¹⁷ Joules enters and leaves the Earth, and you identify where it enters and leaves the Earth, then you’re fine, you’re acknowledging reality, and the flux densities explain the processes we see in reality, such as the water cycle.
However, if you say that the flux density is equal for every second and every place, then you either have to say that the Earth emits the same flux as it receives from the Sun, which is wrong, or that the Sun deposits the same flux as the Earth emits, which is also wrong. In the former case it makes the Earth too hot (on paper), and in the latter case it makes sunshine too cold (on paper).
So the problem is that climate science went down the path of equating flux densities in their energy budgets, instead of the correct procedure of equating energy. If you equate energy you’re fine; if you equate energy flux density, you create mistakes.
When equating energy flux density, climate science thus chose the branch of making Sunshine way too cold (-18 degrees C). Such freezing cold sunshine is at odds with the observation and fact that Sunshine creates the water cycle by both melting and evaporating H2O. Instead of reconsidering whether or not the “numbers on paper” were correct, climate science chose to add some more numbers on paper called “the greenhouse effect” in order to bump up the temperature from -18 degrees C to the temperature of the air near the ground, which is +15 degrees C on average.
The interesting thing is that the Earth is actually -18 degrees C, as observed from outer space. Given the total energy input from the Sun, the Earth does have the global average temperature of -18 degrees C that is predicted when re-emitting that same energy from the Earth. The global average temperature is -18 degrees C when observed from outer space.
But -18 degrees C is NOT the input! It is only the output. The input from the Sun is NOT freezing cold. Only the output of the Earth has a wattage of 240 W/m² and an associated temperature of -18 degrees C. The solar input is actually really hot, maximizing at up to +121 degrees C: ouch hot!
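The +121 degrees C maximum can be reproduced by applying the Stefan-Boltzmann relation to the full solar constant at Earth’s distance. Note the assumption in this sketch: the zenith flux is taken as ~1370 W/m² with no albedo reduction applied, which is one reading of how the article’s maximum is obtained.

```python
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1370.0          # approximate solar constant at Earth's distance, W/m^2
                     # (assumed value; no albedo reduction applied at zenith)

# Stefan-Boltzmann temperature of the full, un-averaged zenith flux:
T_max = (S0 / sigma) ** 0.25

print(f"{T_max - 273.15:.0f} C")   # ~ +121 C
```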
The mistake that climate science makes is to equate the output temperature with the input temperature, which is the same thing as equating the output flux to the input flux, in a rather naive attempt to conserve energy. But this goes about it the wrong way – we need to conserve energy first, and let the fluxes be whatever they actually are in real-time to correspond with that energy. It is wrong to start at the output flux and work backwards, equating it with the input flux, in order to conserve energy. This conserves energy but it does not conserve physics, because physics depends on real-time flux, and in real-time the input flux is not the same as the output flux. The input flux occurs over only half of the area that the output flux comes from, and hence is much denser and much warmer.
The IPCC model (first graphic below), and all other greenhouse effect models such as this one from Harvard (second graphic),
all start by equating the output flux with the input flux. Of course this has all the problems of violating real physics by assuming that the Earth is flat and that the Sun is farther away than it actually is, etc., as has been discussed here extensively. An additional major catastrophe for these models is that, if you just stop to think about what they’re saying about the solar input, their solar input can’t melt ice or evaporate water or create the water cycle, or warm anything up at all, by itself. That is because these models do not conserve physics, even if they might “numerically” conserve flux. This is the difference between math and physics, and understanding it is the difference between being a physicist and not being one. Many of these people who think of themselves as physicists are, unfortunately, not.
With a model that conserves energy as well as physics, and allows the fluxes to be the actual real-time values they must have in reality, the greenhouse effect is no longer required to make the atmosphere heat itself up some more without being an actual energy source.
The Earth system is not analogous to a house being heated with a furnace running at -18 degrees C, whose temperature then gets amplified to +15 degrees C by insulation or backradiation or heat trapping. In fact, your furnace in your house doesn’t even work like this. In fact, nothing anywhere works like this, because it is a violation of thermodynamics. The greenhouse effect models have to violate thermodynamics because the people creating them never go back to think about whether they are conserving energy vs. flux, and whether or not they are conserving physics. These details apparently haven’t appealed to them.
By steadfastly adhering to the false idea that solar input is freezing cold at -18 degrees C, and never questioning it, and never thinking about the real-time physics, they are forced to invent “on paper” a mechanism where something cold heats itself up with its own energy. Reality simply doesn’t do this and we all know that. The input to the Earth is not -18 degrees C with the greenhouse effect amplifying it to +15 degrees C. The input to the Earth is +30 degrees C as a “linear average”, and this average is composed of a maximum of +121 degrees C, and the related flux input has enough strength to deposit an enormous amount of trapped energy in the latent heat of liquid and vaporous water, which a -18 degrees C input simply can’t do. A -18 degrees C input doesn’t conserve the physics…not without making up the greenhouse effect to make up the difference to what the Sun actually does with its real-time flux.
The resulting global average temperature of the Earth is then subsequently -18 degrees C. There is no cold being amplified to hot, anywhere. The furnace in your house doesn’t work by running at -18 degrees C and then being amplified by insulation to room temperature, +22 degrees C. Imagine if things worked that way. Why run the furnace at -18 degrees C, since you could engineer a better system to run at only -100 degrees C and be amplified by backradiation or heat-trapping to +100 degrees C? As a simple matter of engineering, you could use as near to zero energy input as you want, and get however high a temperature you wanted. This has never been done because it is a violation of thermodynamics, because reality doesn’t behave this way. If reality did behave this way, nothing would be stable, because cold things would always be accidentally heating themselves up to destruction.
Your house furnace runs at a few thousand degrees, and the insulation in your house helps to trap the air warmed up by the fire in the furnace. If your house is drafty or has poor insulation, the warm air escapes the inside of your house easily, exchanging itself with cool air from outside. If you have a well-sealed house with good insulation, the warm air stays inside your house for a longer time.
The most the furnace could do is heat the air to the temperature of the fire inside it, and no insulation will make this temperature higher, because the temperature of the fire is determined by the flux density of the energy release from the combustion of natural gas. You can’t make this combustive release of energy hotter by reflecting or trapping its own energy on itself – the maximum temperature is determined by the source supply of energy, coming from a chemical reaction (natural gas oxidation).
Likewise, the solar energy input on the Earth’s surface (some energy also goes directly into the atmosphere) induces a temperature of up to +121 degrees C, and there is no way to trap or reflect this temperature to make itself hotter, as is claimed by the greenhouse effect. We checked, in the real world with empirical observational data, to see if the solar energy was being trapped or back-radiated by the atmosphere in such a way as to induce a higher temperature, using the predictions of the greenhouse effect math. The data showed that the greenhouse effect math was wrong. Of course, we already knew this, but we did the due diligence of performing the appropriate observational experiment just to make sure. We already know the greenhouse effect math is wrong, because nowhere do we have something running at -100 degrees C being amplified to +100 degrees C – this would be a perpetual motion machine.
Now, to be clear: the greenhouse effect is premised upon a -18 degrees C solar input being amplified to +15 degrees C by the atmosphere trapping or back-radiating heat. None of this is correct because 1) the solar input is not -18 degrees C, and 2) reality doesn’t function like this in any case – heat can’t amplify its own temperature.
At this point many greenhouse effect advocates will switch their argument to the input being “the surface of the Sun”, which has a surface temperature of 6000 degrees C. Well, the Sun’s surface temperature of 6000 degrees C is also not relevant to the Earth, because at the distance the Earth is from the Sun (the Earth is not on the Sun’s surface!), the power of sunlight has been reduced to +121 degrees C. The only way to make sunlight warmer than this again is to re-condense the sunlight with a mirror or magnifying glass, and the atmosphere doesn’t do this. Well, I don’t think it does, but if it did, this would not be a greenhouse effect, but a natural phenomenon. Any “back-radiation” or “trapped radiation” doesn’t have the same spectrum as sunlight; it just has the thermal spectrum of its local Earth-bound temperature, and so it can’t add to or re-condense the solar input. By extension, any “back-radiation” or “trapped radiation” has its source in its own local temperature, and equal temperatures don’t add together to make hotter temperatures.
But that’s fine – so the solar input at the Earth is +121 degrees C, not 6000 degrees C, and definitely not -18 degrees C. The argument then goes on to say that the insulation from the greenhouse effect makes the surface closer to +121 degrees C. Please realize that at this point we are entirely outside the bounds of the actual greenhouse effect and what it is supposed to do – amplify -18 degrees C to +15 degrees C. Greenhouse effect advocates who “go here” are contradicting their original position entirely, and have thus lost any credibility. Now we’re talking about hot sunlight heating the surface to high temperature all by itself, with insulation promoting even higher temperatures, closer to +121 degrees C. Of course, what is left out of this argument is any estimation of the temperature that the hot sunshine is heating the ground to in the first place. If it is already heating the ground to high temperature, are we sure there is any requirement to heat it further, as the improperly flux-averaged models require? That needs to be determined first, before we can say whether or not an additional heating is present from the GHE. Besides which, if the solar input is +121 degrees C, it seems like there is a lot more cooling than heating going on! Also, using the correct math and physics to average this input over the hemisphere results in +30 degrees C, which is still much warmer than the average global temperature. And finally, in the paper linked above, we used the correct physical input of +121 degrees C in the correct real-time thermal equations, and predicted the temperature of the surface both with and without an additional heating from a greenhouse effect: the observational data showed that there was no additional greenhouse effect heating on top of the solar input.
You may notice that we run into many problems of ambiguity at the basis of climate science. For example, the ambiguity of whether they are conserving energy or energy flux, and the ambiguity of what happens if you take the incorrect approach. But one of the biggest ones is this idea of a “global average temperature” or “global average surface temperature”. When climate science refers to “global average” temperature, it is actually referring to a numerical value of +15 degrees C which corresponds to the average near-surface air temperature at an altitude of 1.5 m above ground (not above sea-level), from a bunch of temperature stations randomly distributed around the planet. So this isn’t a surface temperature and isn’t global.
The true “global average temperature” is that measured from outer space, and this has a value of -18 degrees C. We do not actually know what the average surface temperature is because we simply don’t have thermometers placed on the actual ground surface. If we did, climate science would see the Sun heating the surface to very high temperature, as we did in that linked paper above, and then they would have to abandon the flux-averaged models that create a greenhouse effect because they assume that the Sunshine is too cold to heat the ground by itself.
So the only thing that climate science actually knows is the average near-surface air temperature, which isn’t all that useful. It would be much better at least to have the actual physical surface temperature.
But the “near-surface air temperature” is +15 degrees C, and this is different from the “global average temperature” of -18 degrees C. Well, this isn’t a big deal. The real global average temperature is measured from outer space, and so it includes all of the atmosphere above 1.5 meters altitude, which the near-surface air thermometers never measure. All of the air above the 1.5 meter measuring stations has a temperature which steadily decreases to about -70 degrees C at an altitude of 15 km (which is ten-thousand times higher than 1.5 meters!). Above this altitude hardly any molecules are left, and non-ideal gas and plasma effects cause the temperature to increase again, but because the gas is so rarefied it contains a negligible amount of heat.
Now, the average of this system, the whole atmosphere and surface included together, is what should have the global average temperature of -18 degrees C, and it does, because the temperature goes from warm (+15 degrees C) at the bottom of the atmosphere to cold (-70 degrees C) at the top of the atmosphere, and measured from space it is -18 degrees C on average. The bottom of the atmosphere is warmest because that is where the Sun does the real-time physical heating in the first place. The temperature decreases with altitude because the heat dissipates outwards from the ground and cools down, and also because of the natural adiabatic gradient. The amount of water vapour in the air-column reduces the adiabatic gradient, and it also reduces night-time cooling, because of latent heat which is trapped and then released by the H2O molecules. (And to be sure, those H2O molecules wouldn’t be able to get into the air-column if sunshine were freezing cold at -18 degrees C!) Finally, the natural existence of the adiabatic gradient (that is, the fact that the air has to cool with altitude) means that the bottom of the atmosphere must be warmer than the top – and therefore the bottom must also be warmer than the average. If the average has to be -18 degrees C, then this average has to be found between the bottom (warmest) and the top (coldest), and the bottom will naturally be warmer than that average. It’s all totally sensible when you do the physics correctly, and no greenhouse effect needs to be invented.
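The averaging argument above can be illustrated with a toy calculation. The assumptions here are not from the article: a temperature profile falling linearly from 288 K (+15 degrees C) at the surface to 203 K (-70 degrees C) at 15 km, with all layers weighted equally. A T⁴-weighted (radiative) mean of such a column necessarily lands between the top and bottom temperatures, and below the surface value; the exact -18 degrees C figure also depends on how the emission is distributed in height, which this sketch does not model.

```python
# Toy column: linear temperature profile from surface to tropopause.
T_bottom = 288.0   # K, +15 C near-surface air temperature
T_top = 203.0      # K, -70 C at ~15 km altitude
n = 1000           # number of equal-thickness layers (simplifying assumption)

layers = [T_bottom + (T_top - T_bottom) * i / (n - 1) for i in range(n)]

# Radiative (T^4-weighted) mean: the warm low layers dominate the emission,
# so the effective temperature sits above the simple linear mean.
T_eff = (sum(T**4 for T in layers) / n) ** 0.25

print(f"effective column temperature: {T_eff - 273.15:.1f} C")
```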
Nowhere in the universe does an input of -18 degrees C become +15 degrees C, by having -18 degrees C combine with itself or “trap” itself.
The above is an extract from the article, ‘The Fraud of the AGHE Part 18: Conserving Wattage does not Conserve Physics – Rant Free Version’, on Joe Postma’s blog, climateofsophistry.com