
C2C Journal

A Planet that Might not Need Saving: Can CO2 Even Drive Global Temperature?



By Jim Mason
Climate change has ingrained itself so deeply in the public consciousness that it’s likely an article of faith that the foundational science was all conducted, tested and confirmed decades ago. Surely to goodness, exactly how carbon dioxide (CO2) behaves in the atmosphere is “settled science”, right? That more CO2 emitted equals more heat and higher temperature is a cornerstone of the ruling scientific paradigm. And yet, finds Jim Mason, the detailed dynamics of how, when and to what degree CO2 transforms radiation into atmospheric heat are anything but settled. So much remains unknown that recent academic research inquiring whether CO2 at its current atmospheric concentrations can even absorb more heat amounts to breaking new ground in climate science. If it can’t, notes Mason, then further changes in CO2 levels not only won’t drive “global boiling”, they won’t have any impact on climate at all.

Electric Vehicles (EVs) – by which are usually meant battery-operated electric vehicles, or BEVs – have long been touted in many countries as the central element in the strategy for stopping global climate change. Canada is no exception. The Liberal government’s Minister of Environment and Climate Change, Steven Guilbeault, has mandated that by 2030, 60 percent of all vehicles sold in Canada must be BEVs, rising to 100 percent by 2035. In anticipation of the accompanying surge in BEV sales, the federal and Ontario governments have offered huge subsidies to battery manufacturing companies. But now, EV sales are stagnating and automobile manufacturers that were rushing headlong into EV production have dramatically scaled back their plans. Ford Motor Company has even decided that, instead of converting its Oakville, Ontario plant to EV production, it will retool it to produce the Super Duty models of its best-selling – and internal combustion engine-powered – pickup truck line.

Heating up the rhetoric: “The era of global warming has ended; the era of global boiling has arrived,” UN Secretary-General Antonio Guterres (top left) has declared; prominent voices such as former U.S. Vice President Al Gore (top right) insist it’s “settled science” that “humans are to blame” for global warming; this view has been accepted by millions worldwide (bottom). (Sources of photos: (top left) UNclimatechange, licensed under CC BY-NC-SA 2.0; (top right) World Economic Forum, licensed under CC BY-NC-SA 2.0; (bottom) Takver from Australia, licensed under CC BY-SA 2.0)

A big part of the justification for forcing Canadians into EVs has been that we must “follow the science.” Namely, the “settled” science which holds that the planet’s atmosphere is heating dangerously and that humans are the cause of this via our prodigious emissions of heat-trapping gases – mainly carbon dioxide (CO2) – which are magnifying the atmosphere’s “greenhouse” effect. Over the past several decades the accompanying political rhetoric has also heated up, from initial concerns over global warming and a threatened planet – terms that at least accommodated political and scientific debate – to categorical declarations of a “climate emergency”. As UN Secretary-General Antonio Guterres asserted last year, “The era of global warming has ended; the era of global boiling has arrived.”

The foundational term “follow the science” is loaded, however. It is code for “follow the science disseminated by the UN’s Intergovernmental Panel on Climate Change (IPCC).” Article 1 of the UN’s Framework Convention on Climate Change actually defines climate change as “a change of climate which is attributed directly or indirectly to human activity”. Elsewhere the document clearly identifies CO2 emitted through the burning of fossil fuels as the causative human activity. So the UN and IPCC long ago limited the scope of what is presented to the public as a scientific investigation and decided not only on the cause of the problem but also the nature of the solution, namely radically driving down “anthropogenic emissions of carbon dioxide and other greenhouse gases.”

The worldwide climate change movement has proved remarkably successful in creating what is known as a “ruling paradigm”. This phenomenon is common in many fields and not necessarily harmful. But in this instance, what is billed as a scientific endeavour has strictly limited the role of open scientific inquiry. The “science” that the movement wants humanity to follow is the result of inferential, inductive interpretation of empirical observations, declared to be “settled science” on the basis of a claimed consensus rather than as a result of controlled, repeatable experiments designed not to reinforce the paradigm but to test it for falsifiability, in accordance with the scientific method. This paradigm has allowed Guterres, for example, to claim that “for scientists, it is unequivocal – humans are to blame.” But it is missing (or attempts to exclude) a key element: rigorous experimentation that subjects the theory on which the paradigm is built to disinterested scientific scrutiny.

Following whose science? The UN’s Intergovernmental Panel on Climate Change (IPCC) defines climate change as being solely “attributed directly or indirectly to human activity,” particularly the burning of fossil fuels, thus limiting not only the scope of public discourse but pertinent scientific inquiry as well. (Sources: (left photo) Robin Utrecht/abacapress.com; (middle photo) Qiu Chen/Xinhua/abacapress.com; (graph) IPCC, 2023: Summary for Policymakers, Climate Change 2023: Synthesis Report)

Thankfully, some scientists still are conducting this kind of research and, for those who value and follow properly-done science, the results can be eye-opening. Two recent scientific papers appear of particular interest in this regard. Each is aimed at assessing the role of CO2 in influencing atmospheric temperature. Nobody doubts that carbon dioxide is a greenhouse gas; the real questions are how much additional radiant energy CO2 is currently absorbing (such as due to the burning of fossil fuels) compared to the past, and what impact this has on the climate.

One might have thought such work would have been done 30 or more years ago but, apparently, it has not. That additional CO2 emitted into the atmosphere absorbs additional radiant energy, and that such additions do so in a linear fashion, are two of climate change theory’s critical premises. So it would seem crucial to establish scientifically whether gases emitted due to human activity – like CO2 – are capable of raising the Earth’s atmospheric temperature and, accordingly, justify their assigned role of causative agent in humanity’s planet-threatening villainy. The papers discussed below thus deal with questions of profound importance to climate change theory and the policy response of countries around the world.

If CO2 is not actually an effective current driver of atmospheric temperature, the implications are staggering.

How – and How Much – CO2 Traps Heat in the Atmosphere

The first paper developed a mathematically rigorous theory from first principles regarding the absorption of long-wavelength radiation (LWR) by a column of air as the concentration of CO2 (or other greenhouse gases such as water vapour, methane, ozone or nitrous oxide) increases in the atmosphere.

The Earth receives solar energy mainly in shorter wavelengths, including visible light. According to NASA, just under half of this incident radiation reaches the ground, where it is absorbed and transformed into heat. Of that half, just over one-third is radiated back into the atmosphere; about one-third of that amount is absorbed by heat-trapping gases, including CO2. (The air’s main constituents of oxygen and nitrogen are essentially transparent to both incoming visible radiation and outgoing LWR). Importantly, CO2 can only absorb meaningful amounts of LWR in two specific bands, but in these bands, it can absorb all of it.

The greenhouse effect, oversimplified and distorted: This seemingly easy-to-understand info-graphic downplays the fact that a large proportion of incoming solar radiation returns to space, omits the key fact that there would be no life on Earth without the natural greenhouse effect, and leaves out the most significant greenhouse gas of all: water vapour. (Source of image: EDUCBA)

The paper employs formulae whose explanation exceeds the scope of this article, but in simplified terms the theory predicts that at some concentration – designated as “C” – one-half of the LWR in the absorbable band is absorbed. Importantly, the theory postulates that the absorption of LWR does not increase in a linear fashion along with the increase in atmospheric CO2. Instead, as the gas concentration increases, the incremental amount absorbed decreases. At twice the C value – 2C – only three-quarters of the incident radiation would be absorbed. And at 3C, seven-eighths.

By 10C, 99.9 percent of the absorbable LWR is being absorbed. In effect, the atmosphere has become “saturated” with the gas from a radiation-absorption perspective and further increases in CO2 have negligible effect on absorption. This relationship is illustrated in Figure 1. As one can see, the relationship is distinctly non-linear: absorption follows a saturating exponential curve, asymptotically approaching 100 percent.

Figure 1: Theoretical graphical depiction of absorption of incident radiation as a function of the concentration of an absorbing gas, in this case forming the core of a theory concerning how long-wave radiation emitted from the Earth’s surface is absorbed by atmospheric CO2. (Source of graph: Jim Mason)
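For readers who want to see the arithmetic behind Figure 1, here is a minimal sketch. It assumes nothing beyond the halving rule described above (each additional increment of C absorbs half of the radiation that still gets through), which is mathematically a saturating exponential; the function and variable names are mine, for illustration only.

```python
# Sketch of the saturation curve in Figure 1, assuming the halving rule
# described in the text: each additional increment of C absorbs half of
# the radiation that is still getting through.
def absorbed_fraction(concentration, C):
    """Fraction of absorbable LWR absorbed at a given gas concentration."""
    doublings = concentration / C          # how many multiples of C are present
    return 1.0 - 0.5 ** doublings          # 1/2 at C, 3/4 at 2C, 7/8 at 3C, ...

for n in [1, 2, 3, 10]:
    print(f"{n}C -> {absorbed_fraction(n, 1):.4f}")   # 0.5000, 0.7500, 0.8750, 0.9990
```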

This graph could appear quite scary at first glance. After all, as more CO2 is added to the atmosphere, more LWR is being absorbed, suggesting more heat is being retained instead of escaping to outer space. So doesn’t the Figure 1 curve suggest that, with CO2 concentrations rising and potentially doubling during our era compared to pre-Industrial times, global warming is indeed occurring, CO2 is indeed the cause, and the IPCC’s warnings are justified? The answers depend on what the value of C actually is for CO2 and what the concentration of CO2 is in the atmosphere today. In other words, on where the Earth now sits along that curve, and where it sat when the pre-Industrial era came to an end. That, in turn, will require actual physical experiments – and these are covered later.

Figure 2: View from end of air column showing random positions of radiation-absorbing gas molecules, with red circles representing their associated radiation-absorbing cross-section. (Source of illustration: Jim Mason)

The non-linear LWR/absorption relationship can be understood conceptually as follows. Each physical CO2 molecule has what amounts to a surrounding area of radiation-absorption capability for specific bands of LWR. This area is “opaque” to those bands. The LWR rising from the Earth’s surface is absorbed if it travels onto this area; outside of it, it is not. The areas can be thought of as little opaque spheres around each molecule, which when viewed externally look like circles. The area of these circles is referred to as the radiation absorption cross-section.

Viewed from the end of the column of air, the circular cross-sections formed by all the CO2 molecules in the air column will effectively add up to some overall fraction of the air column’s cross-sectional area becoming opaque to the LWR. Radiation that strikes any of that area will be absorbed; radiation travelling through the rest of the column’s cross-sectional area will pass into space.

At some concentration of molecules – dubbed C in this essay, as mentioned – half of the column’s cross-section will be opaque and absorbing the incident LWR. This is illustrated in Figure 2. Because the gas molecules are positioned randomly within the column of air, their cross-sections will overlap when viewed from the end; the overlapping areas cannot absorb the same radiation twice. C is the concentration at which the effective opaque area, taking into account all the overlapping, is one-half the column’s cross-sectional area.

If the gas concentration is then increased by C, i.e. is doubled, the new molecules will also have an associated opaque area equal to half of the column’s cross-sectional area. Half of this, however, will coincide with the half that is already opaque, so will have no impact. The other half, or one-quarter of the column’s cross-section, will become newly opaque and start absorbing LWR. If the concentration is again increased by C, the new molecules will also have a total opaque area equal to one-half the column cross-section, but three-quarters of this will coincide with already-opaque area so only one-quarter of that one-half, or one-eighth in total, will become new radiation-absorbing opacity.

Figure 3: Illustrative depiction of radiation-absorption cross-section illustrating how the transparent area, where additional molecules would be exposed to the radiant heat source and, therefore, would absorb radiation, is progressively reduced as more molecules are added; after several more iterations, this leads to radiation absorption “saturation” after which no further radiation is absorbed no matter how many more absorbing molecules are added, since all radiation in that wavelength band is already being absorbed. (Source of illustrations: Jim Mason)

This progression is illustrated in Figure 3, but is perhaps more easily visualized in Figure 4. Here, the half of the cross-sectional area of air column that was rendered opaque by the CO2 molecules is shown as being all on one side of the column. The opacity caused by each successive addition of C number of CO2 molecules is shown in a different colour and is positioned to highlight the impact on the remaining transparent cross-sectional area. As can be seen, each successive increase in concentration of C increases the amount of radiation absorption by decreasing amounts – by a factor of two. After ten such increases, the transparent fraction of the column would be reduced to 0.1 percent of its area so that 99.9 percent of the incident radiation is being absorbed.
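The overlap argument can be checked numerically. The toy simulation below is my own construction, not taken from either paper: it treats the column’s cross-section as a grid of cells and lets each successive batch of molecules randomly render half of all cells opaque. Because each new batch overlaps the already-opaque area at random, the newly opaque fraction shrinks by roughly half each time, reproducing the 1/2, 3/4, 7/8… progression described above.

```python
import random

# Toy check of the overlap argument: the column cross-section is a grid of
# cells; each batch of C molecules randomly renders half of all cells opaque.
# Overlap with already-opaque cells means the *new* coverage halves each time.
random.seed(0)
cells = 100_000
opaque = [False] * cells

for batch in range(1, 6):
    for i in random.sample(range(cells), cells // 2):  # this batch's opaque area
        opaque[i] = True
    frac = sum(opaque) / cells
    print(f"after {batch}C: {frac:.3f} opaque")  # ~0.500, 0.750, 0.875, 0.938, 0.969
```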

Although the foregoing description is conceptually correct, a full understanding of natural processes and of the theory requires taking several other considerations into account, most of which are outside the scope of this discussion. One aspect that is important to understand: as mentioned above, CO2 and other greenhouse gases only absorb outgoing radiation efficiently in particular regions (bands) of the electromagnetic spectrum; they absorb little or none in other bands and are therefore “transparent” to any radiation in those bands. The above discussion applies to the regions of the spectrum where the gas can absorb radiant energy at 100 percent.

Figure 4: Alternative depiction of the reduction in incremental radiation-absorbing area as the absorbing gas concentration is increased in multiples of the concentration that absorbs 50 percent of the incident radiation. As in Figure 3, successive sets of molecules are indicated in red, orange and yellow, with another added set indicated in green, while blue represents the remaining transparent area. (Source of illustration: Jim Mason)

Another aspect – which becomes important in the following section – is that the Earth’s surface temperature varies by location, weather, time of year and time of day. This will affect how much radiant energy goes out in various wavelengths in various places, and the absorbing gas’s absorption capacity. While the theory holds that this does not alter the basic non-linearity of absorption nor the “saturation” phenomenon, it could alter the point at which “C”, or 50 percent absorption, is reached – something that can be tested through experimentation.

The net result of all this is that, if the theoretical formulation is correct, one would expect to see a curve similar to Figure 1 for any individual greenhouse gas – or combination of gases – and a radiant energy source of any temperature, with the curve’s specific shape depending on the gas and/or mixture of gases and the temperature of the radiant energy source. From such a curve, it would be possible to determine the concentration at which the gas is absorbing 50 percent of the maximum energy that it will absorb when its concentration is increased to a very large value.

The paper that develops this theoretical formulation is entitled Dependence of Earth’s Thermal Radiation on Five Most Abundant Greenhouse Gases and was co-authored in June 2020 by William A. van Wijngaarden and William Happer. It is highly technical and would be very difficult for anyone without a strong background in mathematics and science, ideally physics, to understand. Accordingly, the accompanying figures in this article that illustrate the paper’s key ideas in a format accessible to the layperson were produced by me, using information derived from the paper’s figures and text.

Van Wijngaarden is a professor in the Department of Physics and Astronomy at York University in Toronto with a more-than 40-year academic track record and nearly 300 academic papers to his credit, while Happer is Professor Emeritus of Physics at Princeton University in New Jersey who had a 50-year-long academic career and nearly 200 papers to his credit. Both authors also have numerous related academic achievements, awards and organizational memberships, and have mentored hundreds of graduate students. Happer happens to be an open skeptic of the IPCC/UN climate change “consensus”, while Van Wijngaarden has testified in a court case that the IPCC’s climate models “systematically overstate global warming”, an assertion that is incontrovertibly true.

Although their paper was not peer-reviewed and, to date, has not been published in a major academic journal, and although both scientists have endured smears in news and social media as climate skeptics or “deniers”, there has not been any known attempt to refute their theory following publication, such as by identifying errors in the logic or mathematics of their theoretical formulation. Accordingly, their paper is in my opinion an example of good science: a coherent theory aimed at explaining a known phenomenon using rigorous scientific and mathematical principles and formulae, plus supporting evidence. It is also, critically, one that can be subjected to physical experimentation, i.e., is disprovable, as we shall soon see.

Running hot: William A. van Wijngaarden (top left) and William Happer (top right), two highly credentialed physicists with outstanding academic track records, are among scientists who are openly critical of the IPCC’s accepted climate models which, as 40 years of temperature observations have clearly shown, “systematically overstate global warming”. (Source of bottom graph: CEI.org)

This opinion is supported by the fact that the same phenomenon of non-linearity and absorption saturation, along with an associated equation referred to as the Beer-Lambert Law, is discussed by Thayer Watkins, a mathematician and physicist, and professor emeritus of economics at San José State University. “In order to properly understand the greenhouse effect one must take into account the nonlinearity of the effect of increased concentration of greenhouse gases,” Watkins notes. “The source of the nonlinearity may be thought of in terms of a saturation of the absorption capacity of the atmosphere in particular frequency bands.”
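For readers who want the equation itself, the Beer-Lambert Law that Watkins invokes takes the standard textbook form shown below. This is a general statement of the law, not a formula copied from either paper.

```latex
% Beer-Lambert Law: the transmitted fraction of radiation falls off
% exponentially with absorber concentration c and path length \ell.
\frac{I}{I_0} = e^{-\varepsilon c \ell},
\qquad
A = 1 - \frac{I}{I_0} = 1 - e^{-\varepsilon c \ell}
```

Here I0 is the incident intensity, I the transmitted intensity, ε the gas’s absorption coefficient, c its concentration and ℓ the path length. Because the absorbed fraction A approaches its maximum exponentially, each successive increment of concentration absorbs a smaller share of what remains, which is the saturation behaviour depicted in Figure 1.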

Subjecting the LWR Absorption Theory to Experimentation – Or, Science the Way it Should be Done

The second paper was published in March of this year and reports on experiments conducted to test van Wijngaarden and Happer’s theory, in accordance with the standard scientific method, using several different greenhouse gases. If the experiments were properly designed to realistically duplicate natural processes and if they then generated results inconsistent with the theory, then the van Wijngaarden/Happer theory could be considered disproved. If the experiments produced results consistent with the theory, the theory would not be proved but would increase in plausibility and justify further experimentation.

The experimental setup is depicted in Figure 5. It was designed to allow the concentration of CO2 within a column of gas (in kilograms per square metre of column area, or kg/m2) to be varied in a controlled way and to measure the fraction of the incident radiation that is absorbed at any concentration. The “column” of gas was contained within a cylinder consisting of a 1-metre length of 150 mm diameter PVC pipe, with polyethylene windows at either end to allow ingress and egress of the radiation. CO2 concentration was changed by injecting measured amounts of the gas via a central valve. Water valves on the cylinder bottom were used to allow an identical volume of gas to escape, thereby maintaining the pressure in the cell. (The background gases into which the CO2 was mixed are unimportant since these remained constant, with only the CO2 concentration varied.)

Figure 5: Diagram of the laboratory setup for measuring the absorption of thermal radiation in CO2. (Source of illustration: Climatic consequences of the process of saturation of radiation absorption in gases, Figure 7)

The radiation source was a glass vessel with a flat side containing oil maintained at a constant temperature. Adjacent to the flat side was a copper plate with a graphite surface facing the gas cell. This ensured that the radiant source, as seen by the cell, was uniform in temperature over the cross-section of the cell and had the radiation profile of a black body at the chosen temperature. The selected temperatures of 78.6°C and 109.5°C were, states the paper, “chosen randomly but in a manner that allowed reliable measurement of radiation intensity and demonstrated the influence of temperature on the saturation mass value.”

Results for CO2 are illustrated in Figure 6, which is taken directly from the paper. The two selected temperatures are separately graphed. Figure 6 clearly shows experimental curves that are qualitatively the same as the theoretical curve in Figure 1 derived from van Wijngaarden/Happer’s paper and the equation noted in Watkins’ website discussion. From the graph it is possible to determine that the concentration of CO2 that results in absorption of 50 percent of the absorbable radiation – the value of C introduced earlier – is about 0.04 kg/m2 for an LWR source temperature of 78.6°C (the one that is closer to the actual average temperature of the Earth’s surface, which NASA lists as 15°C).

Figure 6: Absorption of incident radiation versus concentration of CO2, with concentration expressed as a weight per cross-sectional area of atmospheric column (kg/m2), using two experimental LWR temperatures. Absorption is effectively measured as the fraction of the total incident radiation that is absorbed in the test column, which is determined by comparing it to an identical test column that maintains the zero-point concentration throughout. The reason that A saturates at less than 1 is that there are many wavelengths in the incident radiation that CO2 does not absorb, which pass through the column regardless of the CO2 concentration, with only the other wavelengths being absorbed. (Source of graph and mathematical formula: Climatic consequences of the process of saturation of radiation absorption in gases, Figure 8)

As the paper notes, the chosen temperatures, while higher than the Earth’s mean surface temperature, facilitate reliable measurements of the radiation intensities and clearly show the effect of temperature on the saturation mass value or, equivalently, the value of C. Specifically, the graphs clearly show that the value of C decreases as the temperature of the radiant source decreases (although with only two points, the nature of the relationship cannot be reliably determined). The implications are discussed in the following section.
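To make concrete how a value of C can be extracted from such measurements, here is a minimal fitting sketch. The saturating-exponential form and the synthetic data points are illustrative assumptions of mine, not the Polish team’s published values; the point is simply to show how the 50-percent-absorption concentration falls out of a measured curve of this kind.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of a saturating-exponential absorption curve, of the kind
# shown in Figure 6, to extract the 50%-absorption concentration C.
def absorption(m, A_sat, m_C):
    """Absorbed fraction vs. CO2 mass per column area m (kg/m^2);
    m_C is the mass at which half of the absorbable radiation is absorbed."""
    return A_sat * (1.0 - 0.5 ** (m / m_C))

# Synthetic (illustrative) measurements, not the published data points.
m_data = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])   # kg/m^2
A_data = np.array([0.09, 0.17, 0.30, 0.45, 0.55, 0.59, 0.60])   # absorbed fraction

(A_sat, m_C), _ = curve_fit(absorption, m_data, A_data, p0=[0.6, 0.05])
print(f"A_sat ~ {A_sat:.2f}, 50%-absorption concentration C ~ {m_C:.3f} kg/m^2")
```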

This experimental paper is entitled Climatic consequences of the process of saturation of radiation absorption in gases and was co-authored by Jan Kubicki, Krzysztof Kopczyński and Jarosław Młyńczak. It was published in the journal Applications in Engineering Science and cites copious sources, though it does not appear to have been peer-reviewed. Kubicki is an assistant professor in the Institute of Optoelectronics in the Military University of Technology in Warsaw, Poland. Kopczyński appears to be a colleague at the same institution specializing in the atmospheric distribution of aerosols, while Młyńczak is an adjunct professor at the same institution. All three have authored or co-authored a number of scientific papers.

Is CO2 Even Capable of Driving Global Temperatures Higher?

According to a reputable atmospheric tracking website, on September 2, 2024 the Earth’s atmospheric CO2 concentration was 422.78 parts per million (ppm). Each ppm worldwide equates to a total atmospheric weight of 7.82 gigatonnes (Gt). The cited concentration therefore amounts to 3,300 Gt of CO2 in the Earth’s atmosphere. Since the Earth’s surface area is 5.1 x 10^14 m2, assuming a uniform CO2 distribution, this concentration can be translated into the units used in the above-cited experiment as 6.48 kg/m2 across the Earth’s surface.
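The unit conversion in the preceding paragraph is easily checked; the sketch below simply redoes the arithmetic using the figures quoted in the text.

```python
# Re-doing the article's unit conversion from ppm to kg per square metre.
ppm = 422.78                 # atmospheric CO2 concentration, parts per million
gt_per_ppm = 7.82            # gigatonnes of CO2 per ppm (figure quoted in the text)
earth_area_m2 = 5.1e14       # Earth's surface area, square metres

total_gt = ppm * gt_per_ppm                    # ~3,306 Gt, i.e. roughly 3,300 Gt
kg_per_m2 = total_gt * 1e12 / earth_area_m2    # 1 Gt = 1e12 kg
print(f"{total_gt:.0f} Gt of CO2  ->  {kg_per_m2:.2f} kg/m^2")   # ~6.48 kg/m^2
```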

This figure might appear at first glance to be a misprint, as 6.48 kg/m2 is approximately 160 times the CO2 C value of 0.04 kg/m2 – the concentration that absorbs 50 percent of the incident LWR. Six-point-six times the C value – the level that absorbs 99 percent of the incident LWR – is still only 0.264 kg/m2. Beyond this, further increases in gas concentration have no impact on absorption or, hence, on temperature. The Earth’s current concentration of CO2 is, accordingly, over 24 times as high as what is needed to achieve the 99 percent absorption value established by experimentation.
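These multiples can be verified in a couple of lines; the sketch below uses only the article’s own figures (6.48 kg/m2 and C = 0.04 kg/m2).

```python
import math

# Comparing today's CO2 column mass with the experimentally derived C value.
current = 6.48               # kg/m^2, from the conversion above
C = 0.04                     # kg/m^2, 50%-absorption concentration from the experiment

print(round(current / C))                 # ~162 times the C value (the article says ~160)
doublings_for_99 = math.log2(100)         # ~6.64 halvings leave 1 percent unabsorbed
threshold_99 = doublings_for_99 * C       # ~0.27 kg/m^2; the article uses 6.6 x C = 0.264
print(round(current / threshold_99))      # ~24 times the 99%-absorption level
```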

Long past the point of change? The Earth’s currently estimated CO2 concentration of 422.78 parts per million (ppm) is about 24 times the estimated CO2 saturation level; even in the pre-Industrial era, CO2 concentrations were more than 10 times that level, suggesting the current rise in CO2 concentration is incapable of driving global temperature. (Source of graph: climate.gov)

The implications of this are quite staggering. According to climate.gov, the CO2 concentration in the pre-Industrial era was 280 ppm and prior to that it oscillated between about 180 ppm and 280 ppm. This means that even the pre-Industrial CO2 concentrations were between 64 and 100 times the C value, as well as being more than 10 times the concentrations needed to reach 99 percent absorption. The CO2 concentration, then, was saturated multiple times over with respect to LWR absorption. A glance back at Figure 1 once again makes it clear that at neither of these CO2 concentration ranges (covering present times and the pre-Industrial era) were the changes to CO2 concentration capable of having any substantive impact on the amount of LWR being absorbed or, consequently, on atmospheric temperatures – let alone the Earth’s whole climate.

Further, they probably never did. According to the published paper Geocarb III: A Revised Model of Atmospheric CO2 Over Phanerozoic Time, the CO2 concentration has never been less than 180 ppm during the so-called Phanerozoic Eon, which is the entire time during which all the rock layers in the geological column, from the Cambrian upwards, were deposited. So there has never been any point during this significant span of Earth’s history when the concentration of CO2 in the atmosphere was not “saturated” from a LWR absorption perspective. Consequently, throughout that entire period, if the new theory and recent experimentation are correct, changes in CO2 concentration have been incapable of having any discernible impact on the amount of LWR absorbed by the atmosphere – or, accordingly, on the global climate.

It’s true that increasing CO2 concentration could be capable of driving global atmospheric temperature higher – but only if it began at vastly lower concentrations than exist at present or at any known previous time. If such a time ever existed, it is long past. At the current order of magnitude in its concentration, CO2 simply appears not to be a factor. If further experimentation also generates results consistent with the van Wijngaarden/Happer theory, it would appear that CO2 is incapable of having any impact on atmospheric temperature at all. It cannot, accordingly, be the primary source of “global warming” or “climate change”, let alone of a “climate emergency” or “global boiling”.

While experimental results consistent with a theory do not prove the theory to be true, and replication of the initial results by the three Polish researchers would be very desirable, the experimental results to date are certainly inconsistent with the current ruling paradigm that CO2 emissions from the burning of fossil fuels are the cause of current climate change and, indeed, that the effect of each increase in concentration is accelerating. According to the rules of decision-making in science, unless the experiment can be shown to be mal-designed or fraudulent, the inconsistency between experiment and theory proves, scientifically, that the current paradigm is false.

The incidence and recession of the Ice Age (top), the Medieval Warm Period (bottom left) and the more recent Little Ice Age (bottom right) are just a few examples of global temperature fluctuations that happened independently of the current era’s burning of fossil fuels or increasing CO2 levels. Shown at top, northern mammoths exhibit at the American Museum of Natural History’s Hall of North American Mammals; bottom left, peasants working on the fields next to the Medieval Louvre Castle, from The Very Rich Hours of the Duke of Berry, circa 1410; bottom right, Enjoying the Ice, by Hendrick Avercamp, circa 1615-1620. (Source of top photo: wallyg, licensed under CC BY-NC-ND 2.0)

Moreover, the new theory and experimental result are consistent with numerous empirical observations that are also inconsistent with the ruling IPCC/climate movement paradigm. Examples abound: the occurrence and recession of the Ice Age, the appearance and disappearance of the Roman and Medieval Warm Periods and the Little Ice Age – all without any CO2 from the burning of fossil fuels – the steady decline in atmospheric temperatures from 1940 to 1975 while CO2 levels steadily increased, and the relative flattening of atmospheric temperatures since about 2000 despite CO2 levels continuing to increase. But if the level of CO2 in the atmosphere has long been above “saturation”, then its variations have no real impact on the climate – as these observations indicate – and something else must have caused these climate variations.

To this can be added the failed predictions of the temperature models and of the dire consequences of not preventing further CO2 increases – such as the polar bears going extinct, the Arctic being free of ice, or Manhattan being covered by water, to list just a few. But again, if the atmosphere has long been CO2-saturated with respect to LWR absorption, then the additional CO2 will have no effect on the climate, which is what the failure of the predictions also indicates.

These results, at the very least, ought to give Climaggedonites pause, although probably they won’t. For the rest of us, they strongly suggest that EVs are a solution in search of a problem. It may be that the technology has a place. The alleged simplicity ought to have spinoff advantages, although the alleged spontaneous combustibility might offset these, and the alleged financial benefit might be simply the consequence of government subsidies. Left to its own devices, without government ideological distortions, the free marketplace would sort all this out.

Climate scare à la carte. (Source of screenshots: CEI.org)

More importantly, these results ought to cause politicians to re-examine their climate-related polices. As van Wijngaarden and Happer put it in their paper, “At current concentrations, the forcings from all greenhouse gases are saturated. The saturations of the abundant greenhouse gases H2O and CO2 are so extreme that the per-molecule forcing is attenuated by four orders of magnitude…” The term “forcings” refers to a complicated concept but, at bottom, signifies the ability (or lack) to influence the Earth’s energy balance. The words “forcings…are saturated” could be restated crudely in layperson’s terms as, “CO2 is impotent.”

Kubicki, Kopczyński and Młyńczak are even more blunt. “The presented material shows that despite the fact that the majority of publications attempt to depict a catastrophic future for our planet due to the anthropogenic increase in CO2 and its impact on Earth’s climate, the shown facts raise serious doubts about this influence,” the three Polish co-authors write in their experimental paper. “In science, especially in the natural sciences, we should strive to present a true picture of reality, primarily through empirical knowledge.”

If, indeed, the CO2 concentration in the Earth’s atmosphere is well beyond the level where increases are causing additional LWR to be absorbed and, as a consequence, changing the climate, then all government policies intended to reduce/eliminate CO2 emissions in order to stop climate change are just as effective as King Canute’s efforts to stop the tides – the only difference being that Canute was aware of the futility.

Jim Mason holds a BSc in engineering physics and a PhD in experimental nuclear physics. His doctoral research and much of his career involved extensive analysis of “noisy” data to extract useful information, which was then further analyzed to identify meaningful relationships indicative of underlying causes. He is currently retired and living near Lakefield, Ontario.




Wisdom of Our Elders: The Contempt for Memory in Canadian Indigenous Policy


By Peter Best

What do children owe their parents? Love, honour and respect are a good start. But what about parents who were once political figures – does the younger generation owe a duty of care to the beliefs of their forebears?

Two recent cases highlight the inter-generational conflict at play in Canada over Indigenous politics. One concerns Prime Minister Mark Carney and his father Robert. The other, a recent book on the life of noted aboriginal thinker William Wuttunee edited by his daughter Wanda. In each case, the current generation has let its ancestors down – and left all of Canada worse off.

William Wuttunee was born in 1928 in a one-room log cabin on a reserve in Saskatchewan, where he endured a childhood of poverty and hardship. Education was his release, and he went on to become the first aboriginal to practice law in Western Canada; he also served as the inaugural president of the National Indian Council in 1961.

Wuttunee rose to prominence with his controversial 1971 book Ruffled Feathers, which argued for an end to Canada’s Indian Reserve system, a system he believed trapped his people in poverty and despair. He dreamed of a Canada where Indigenous people lived side-by-side with all other Canadians and enjoyed the same rights and benefits.

Such an argument for true racial equality put Wuttunee at odds with the illiberal elite of Canada’s native community, who still believe in a segregated, race-based relationship between Indigenous people and the rest of Canada. For telling truth to power, Wuttunee was ostracized from the native political community and banned from his own reserve. He died in 2015.

This year, William’s daughter Wanda had the opportunity to rectify the past mistreatment of her father. In the new book Still Ruffling Feathers – Let Us Put Our Minds Together, Wanda, an academic at the University of Manitoba, and several other contributors claim to “fearlessly engage” with her father’s ideas. Unfortunately, the authors mostly seek to bury, rather than praise, Wuttunee’s vision of one Canada for all.

Wanda claims her father’s desire for a treaty-free, reserve-free Canada would be problematic today because it would have required giving up all the financial and legal goodies that have since been showered upon Indigenous groups. But there is a counterfactual to consider. What if Indigenous Canadians had simply enjoyed the same incremental gains in income, health and other social indicators as the rest of the country during this time?

Ample evidence on the massive and longstanding gap between native and non-native Canadians across a wide variety of socio-economic indicators suggests that integration would have been the better bet. The life expectancy for Indigenous Albertans, for example, is a shocking 19 years shorter than for non-native Albertans. William Wuttunee was right all along about the damage done by the reserve system. And yet nearly all of the contributors to Wanda’s new book refuse to admit this fact.

The other current example concerns Robert Carney, who had a long and distinguished career in aboriginal education. When the future prime minister was a young boy, Robert was the principal of a Catholic day school in Fort Smith, Northwest Territories; he later became a government administrator and a professor of education. What he experienced throughout his lifetime led the elder Carney to become an outspoken defender of Canada’s now-controversial residential schools.

When the 1996 Royal Commission on Aboriginal Peoples (RCAP) attacked the legacy of residential schools, Carney penned a sharp critique. He pointed out that the schools were not jails despite frequent claims that students were there against their will; in fact, parents had to sign an application form to enroll their children in a residential school. Carney also bristled at the lack of context in the RCAP report, noting that the schools performed a key social welfare function in caring for “sick, dying, abandoned and orphaned children.”

In the midst of the 2025 federal election campaign, Mark Carney was asked if he agreed with his father’s positive take on residential schools. “I love my father, but I don’t share those views,” he answered. Some Indigenous activists have subsequently accused Robert Carney of residential school “denialism” and “complicity” in the alleged horrors of Canada’s colonial education system.

Like Wanda Wuttunee, Mark Carney let his father down by distancing himself from his legacy for reasons of political expediency. He had an opportunity to offer Canadians a courageous and fact-based perspective on a subject of great current public interest by drawing upon his intimate connection with an expert in the field. Instead, Mark Carney caved to the requirements of groupthink. As a result, his father now stands accused of complicity in a phony genocide.

As for William Wuttunee, he wanted all Canadians – native and non-native alike – to be free from political constraints. He rejected racial segregation, discrimination and identity politics in all forms. And yet in “honouring” his life’s work, his daughter misrepresents his legacy by sidestepping the core truths of his central belief.

No one doubts that Wanda Wuttunee and Mark Carney each loved their dads, as any son or daughter should. And there is no requirement that a younger generation must accept without question whatever their parents thought. But in the case of Wuttunee and Carney, both offspring have deliberately chosen to tarnish their fathers’ legacies in obedience to a poisonous ideology that promotes the entirely un-Canadian ideal of permanent racial segregation and inequity. And all of Canada is the poorer for it.

Peter Best is a retired lawyer living in Sudbury, Ontario. The original, longer version of this story first appeared in C2CJournal.ca.



The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will



By Gleb Lisikh

Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.

This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.
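The point that next-word prediction has no notion of truth can be seen in miniature. The toy predictor below is my own illustration, nothing remotely like a production LLM in scale or architecture, but it works on the same principle: it learns only which word tends to follow which in its tiny training text and generates accordingly, with truth never entering the calculation.

```python
import random
from collections import defaultdict

# A toy next-word predictor: it learns nothing but which word follows which.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

follow_counts = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follow_counts[word].append(nxt)        # record every observed continuation

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    choices = follow_counts.get(word)
    if not choices:                        # dead end: no observed continuation
        break
    word = random.choice(choices)          # pick a statistically plausible next word
    output.append(word)
print(" ".join(output))   # fluent-looking, but truth never enters the calculation
```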

But because the LLM’s speech is fluent, users instinctively project self-understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. This is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives:

Myth 1: “If we just scale up the models and give them more ‘juice’ then true reasoning will eventually emerge.”

Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. One assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make a car jump off a ramp), but you don’t convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.

Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”

This one is especially dangerous as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of LLM’s mundane knowledge should be extended to trusting the opaque methods through which it is obtained. But truth has rules. For example, a conclusion only becomes epistemically trustworthy when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).

LLMs do neither of these. They cannot deduce because their architecture doesn’t implement logical inference. They don’t manipulate premises and reach conclusions, and they are clueless about causality. They also cannot empirically verify anything because they have no access to reality: they can’t check weather or observe social interactions.

Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.

The “flying car” salesmen, peddling various accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a human-prompt engineering skills competition rather than a valid assessment of machine smartness.

Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainer’s preferences through mechanisms like RLHF (reinforcement learning from human feedback) miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini’s representing America’s Founding Fathers as a lineup of “racialized” gentlemen.

Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:

• Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
• Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
• Neuro-symbolic AI: combines neural prediction with logical reasoning; and
• Agentic AI: acts with the goal in mind, receives feedback and improves through trial-and-error.

Unfortunately, the current progress in AI relies almost entirely on scaling LLMs. And the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.

LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.

The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning that LLMs handle only primitively or not at all – a design fact that is non-controversial among AI insiders. Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.

The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here.

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.


