Text and slides of the 35 min lecture:

World Energy Consumption and Resources:

An Outlook for the Rest of the Century

and the Role of Thermodynamic Research

delivered by Gian Paolo Beretta, Università di Brescia, Italy

on Tuesday, Nov. 3, 2009, 12:15 pm

in KJL2, Kjelhuset, Gløshaugen, NTNU, Trondheim, Norway

hosted by the Gas Technology Centre NTNU-SINTEF



Thank you. And I want to thank the Gas Technology Centre NTNU-SINTEF for making my visit here in Trondheim possible. My collaboration with professors Kjelstrup and Bedeaux has so far been very productive. Our discussions have helped us better understand some of the general principles of nonequilibrium thermodynamics which, as I hope you will see by the end of the talk, are essential for the future of energy technologies.

So, let me start with this initial slide, which sets the unit of measure of energy best suited for our discussion today: the ton of oil equivalent, the toe. That is, the average heating value of one metric ton of oil, which is about 7.3 barrels or 12,000 kWh. The current global yearly consumption of primary energy is about 12 billion toes. The average per-capita consumption is 3.6 toes per year in Europe, it is as high as 7.2 in North America, and the world average is 1.8.
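These conversions are easy to check in a few lines. The sketch below uses the standard value of about 11,630 kWh per toe (which the talk rounds to 12,000); the function name is mine, and the population figure is the 6.4 billion quoted later for year 2005.

```python
# Unit conversions for the ton of oil equivalent (toe), as quoted in the talk.
TOE_KWH = 11630.0      # 1 toe is about 11,630 kWh (rounded to 12,000 in the talk)
TOE_BARRELS = 7.3      # 1 toe is about 7.3 barrels of oil

def per_capita_toe(global_toe, population):
    """Average yearly per-capita consumption, in toe per year."""
    return global_toe / population

# Figures from the 2005 chart: ~12 billion toe for ~6.4 billion people.
world_avg = per_capita_toe(12e9, 6.4e9)   # about 1.9, quoted as ~1.8 toe/year
```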



The outline of the talk is as follows. I will first review historical data on past consumption of primary energy, together with some social and economic data and considerations useful for an outlook. I will then discuss a plausible scenario about demographic growth, energy needs, and mix of primary resources for the rest of the century. We will compare this scenario with data on currently proved and presumed energy reserves on our planet, to decide whether we are really running out of fuel as media and politicians keep saying. Next, we use the scenario to infer how much carbon dioxide will be released by primary energy consumption, and discuss what impact this may have on global warming.

I will then discuss the role and importance of research in thermodynamics, and I will conclude with some provocative statements to spark up the discussion.



Let's start with the global energy consumption over the last 160 years. I chose to start from 1850 because that's when the word entropy was introduced by Clausius. Today, the global demand of about 12 billion toes per year is covered for 78% by fossil fuels (33% oil, black in the figure; 21% natural gas, red; 24% coal, gray), 5.5% by nuclear fuels (violet), and 5.5% by hydro (blue), while the remaining 11% is non-commercial biomass (green), like wood, hay and other forage, which in rural economies is still the main resource. These rural biomasses are seldom considered in the energy statistics of oil companies and international energy agencies, but in a global framework they are part of the picture, because at least two thirds of humankind still lives in rural and craft economies not much different from those of the European Middle Ages. Consider hay for animal feed: 160 years ago, in the United States, two thirds of the mechanical work came from horses, and in 1925 there were still about 30 million horses.

The direct use of solar energy and wind power (yellow in the graph) is currently estimated at about 10-20 million toes (millions, not billions) and so, on the scale of this chart it is invisible, since it meets less than 0.1-0.2% of the global need.



In this chart, which refers to year 2005, nations are divided into 10 groups homogeneous by type of economy, industrial development and intensity of energy consumption. For each group of nations, the left bar is the yearly consumption in billion toes; while the right bar represents the population, in billions; and the numbers in blue at the top indicate the intensity of energy consumption, expressed in toes per year per capita. Globally, in year 2005, about 6.4 billion souls consumed about 12 billion toes, with an average intensity of 1.8 toes per year per capita.

The graph shows very pronounced disparities in the intensity of consumption. It varies widely from country to country, depending on many factors such as the different geographical and climatic conditions, and especially the level of development and industrialization, as we can infer by taking a look at historical trends.



If we consider bare survival, an active human body requires about 3000 kilocalories per day, equivalent to about 0.11 toes per year. At least, that is the figure if you have to run and hunt to get your food. It is estimated that with the discovery of fire 500 thousand years ago, the per capita requirement doubled to 0.22 toes per year. Another doubling, to 0.45, is attributed to the Neolithic, due to the additional consumption needed to heat the homes that replaced the natural caves, to feed animals, for which it was necessary to cultivate the fields, and later to extract and work bronze and iron. Within the Roman Empire, the increase in demand was counterbalanced by progressive improvements in the efficiency of use. With the use of water to power mills, wind propulsion to power ships and then also windmills, and the use of oil and bituminous products for lighting, the per capita consumption settled at about 0.5 toes per year and did not change much until the 19th century. But then the transformation from rural to industrial economy in very delimited geographic areas, beginning with England, involved a rapid increase in the demand for coal: in England, per capita consumption rose from 0.55 to 2.8 toes in one century. In the next century, after complete industrialization, even though GDP more than doubled, the per capita energy demand grew only to 3.5 toes per year.

In the case of Italy, from the rural-and-craft Greco-Roman economy to about 1900 there was no substantial change in per capita consumption: 0.5 toes per year, mainly from renewable sources. Industrialization started around 1913 and was complete by 1981, with agriculture's share of gross product down from 42% to 6.4% and, as in England during industrialization, the per capita energy consumption up by a factor of five, to 2.5 toes per year.

Overall, in the last two millennia, the global demand for energy has increased 70-fold, the population 20-fold, and the per capita consumption little more than 3-fold (from 0.5 to 1.7 toes per year). The transition from renewable energy sources (wood and forage) to massive use of fossil fuels has accompanied and enabled the processes of development and industrialization, which have allowed profound changes in the quality of life.

So, industrialization, and its direct correlation with per capita energy consumption, is a key factor in attempting a reasonable forecast.



There is a strong correlation between the per capita consumption of energy and various indicators of social and economic development, and in particular an inverse correlation with the fertility rate and hence the rate of population growth. Energy allows improvements in the standard of living, broad access to health care, use of contraceptives, longer life expectancy, services that increase the level of literacy and access to information, and working opportunities for women. Per capita energy consumption therefore emerges both as an index and as an instrument of social and economic development. A most important feature of industrialization is the lesser need to have many children and numerous families, which in rural societies are necessary for survival and to sustain the unproductive members of the group, the children and the elderly. Countries with high standards of living and higher per-capita consumption have very low or no population growth. Underdeveloped countries have high growth rates, sometimes doubling the population every 25 years. These graphs show that an important threshold in development seems to lie at one toe per year per capita. Above it, social conditions improve, life expectancy reaches 70 years, fertility decreases and population growth slows down. Having many children becomes a luxury that only exceptionally rich countries can afford.



Clearly there is no room on Earth for indefinite population growth. Most studies agree with the estimate that a sustainable future for our planet requires the global population to stabilize at no more than about twice the current population, and that this will occur during the current century. But population growth rates will vary greatly from region to region of the planet, depending, as we have seen, on the current stage of development and, hence, on per capita energy consumption. On this basis, the chart shows the expected population growth for the rest of the century, for each of the 10 groups of countries we already identified.

We will pass from 6 to 11 billion people. Growth will stabilize in all countries soon after they pass the threshold of 1 toe per year per capita. Africa and South Asia today host a third of humankind; at the end of the century they will host a half. North America, Japan, Australia, New Zealand, Europe and the former Soviet Union states will drop from today's overall 22% to only 13%.



While this social and economic development takes place, technological development also continues. The efficiency of exploitation of energy resources, and of all end uses of energy carriers, will continue to improve.

This graph shows how technical and scientific research has resulted in a steady improvement of energy conversion machines. The chart spans the last 300 years, from the first steam engines at the dawn of the industrial revolution in England, to the modern combined-cycle power plants and fuel cells. On the right scale, it shows the thermodynamic effectiveness of the best-available mature technology for converting fossil-fuel availability into mechanical work and electricity, which is now around 60%. We are talking here of the thermodynamic effectiveness, also called second-law efficiency or exergy efficiency. The scale of this graph is not linear in the effectiveness itself, but in the logarithm of the ratio of the effectiveness to one minus the effectiveness, as shown on the left of the graph. The fact that the data fall on a remarkably straight line on this so-called logistic scale is a typical feature of any learning process. The importance of this impressive correlation of historical data is that it shows at which rate progress will continue. We can be quite confident that by the end of the century we will have mature energy conversion technologies with thermodynamic effectiveness well over 80%. If we show the same data on a linear scale, the graph looks as follows.



The typical S-shaped curve of a learning process, where the learning rate is at any time proportional to the current effectiveness and the current room for improvement. The time constant turns out to be about 60 years: sixty years to change the ratio of eta over one minus eta by a factor of e, and sixty years, therefore, to go from the current 60% to 80%. This graph is very exciting, especially for people like us who work in thermodynamics: as you see, we are still about midway through our learning process of understanding, mastering, and engineering the laws of thermodynamics. That's why thermodynamicists are going to be in business for a long while.
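The logistic extrapolation described above can be sketched in a few lines. The function name is mine; the rule is exactly the one stated on the slide, that the odds ratio eta/(1 - eta) grows by a factor of e every 60 years.

```python
import math

def logistic_projection(eta0, years, tau=60.0):
    """Project second-law effectiveness eta forward in time assuming the
    learning-curve trend of the chart: the ratio eta/(1 - eta) grows by
    a factor of e every tau years."""
    odds = eta0 / (1.0 - eta0)
    odds *= math.exp(years / tau)
    return odds / (1.0 + odds)

# Sixty years from today's ~60% effectiveness:
eta_future = logistic_projection(0.60, 60.0)   # about 0.80, as claimed
```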



Similar improvements will of course also occur in all the end uses of energy, so that at the end of the century the overall life-cycle efficiency will have doubled; that is, the current standard of living, possible today in Europe with a per capita consumption of 3 toes per year, will require only 1.5. Here is a scenario of trends in per-capita consumption for the same 10 groups of countries already cited. Countries that are already industrialized will improve their efficiencies and contribute to better efficiencies in developing countries, partially mitigating the sharp increase intrinsic to their process of industrialization. Overall, from the 1.7 world average at the beginning of this century, we will end the century with a world average of only 1.4, in spite of the industrialization of most of the globe.



We can now combine this chart with the one about demographic growth we just discussed, and multiply, for each group of nations, the per capita demand by the expected population. Thus we get an estimate of the energy needs for each group of nations.



Here is the resulting scenario. The global demand will keep growing quickly for a few decades, but will then stabilize at the end of the century at about 16 billion toes, against the current 12 billion. The demand of the most industrialized nations will grow slightly in the first two decades, but will then fall to three quarters of their current needs. Relative to global needs, however, the share of today's most industrialized countries will fall from 60% to 28%, while that of Africa and South Asia will rise from 10% to 33%.

Given this forecast of energy needs, a much more disputable affair is to predict how the mix of energy resources used to satisfy them will evolve. The various possible scenarios depend on many variables, especially the geopolitical context that will develop. But a most important factor is the intrinsic inertia of the energy market.



This graph shows the historical trends of the market shares of the various primary sources, again shown on a logistic scale, namely a scale linear in the log of f over one minus f, where f is the market share of each resource.

The laws of the market have resulted in the gradual competition and replacement of resources, from wood and rural biomasses to coal, from coal to oil, to the current mix of resources in which gas is about to overtake oil. The slopes on this chart warn about the huge inertia of the economic and energy technology system. It takes tens of years for a new resource to reach a significant market share. The very lifetime of production facilities ranges from 20 to 40 years. It is obvious that the inertia of the system involves long response times and long-term returns on investments.

So, taking into account this intrinsic inertia…


...here is a plausible scenario for the future mix. It is very balanced, I believe, though quite optimistic in some ways.

While natural gas will overtake oil in about 10 years from now, coal will maintain its share and will overtake both in about 50 years, because it will be used with increasingly clean technologies. Nuclear energy will continue to grow under the (optimistic) hypothesis that the geopolitical context will stabilize, allowing for ways to manage the military risks, and that technology will solve current concerns about environmental safety and radioactive waste management. Notice how solar and wind will gradually climb up to a sizable fraction, and how biomasses will keep their market share, as there will be a shift from rural biomasses to industrial ones such as switchgrass and solid wastes.



Let’s view this scenario in a more natural graph showing how the mix of energy resources will match the predicted energy demand.

So, oil consumption will peak in about 10-15 years and then start a very slow decline, due to the progressive but slow depletion of the current wells and the decreasing rate at which new wells are found. Natural gas and 'clean' coal will take up oil's role and will alternate as the predominant resources of the century. Probably some non-conventional oil resources, such as tar sands, bituminous shales, heavy crudes and methane hydrates, will also start contributing.

Renewable energy consumption will increase, thanks to the increasing deployment of hydroelectric resources and increasing 'sustainable' uses of biomasses and solid wastes.

Direct solar power, wind power, tidal power and other renewable, or better, quasi-inexhaustible resources will certainly increase significantly, but they will keep a marginal role for the entire century.



A practical consequence of the inertia of the system is that any uncoordinated local or national energy policy, not well weighted and well concerted internationally and globally, cannot possibly change the course of the system. Not only would such a local energy policy be ineffective and dissipative, it could even reduce the confidence of operators in the stability of the economic and regulatory context in which they are called to make investments. For this reason, I am not so sure that federalism and the free market are good ideas for the progress of energy efficiency; the French have it right, and President Obama is now following, trying to restore to the federal government the controlling role that it needs in order to lead the energy new deal that he wants.

Well, anyway, from this scenario of demand and mix of resources, we can calculate the cumulative consumption of each resource by the end of the century, and compare it with the cumulative consumption so far. In the next slide, we will compare these numbers with current estimates of the available reserves of fossil and nuclear fuels, to decide if we have enough inventory to satisfy the predicted demand for each resource.



Here is the histogram that compares past consumption and future demand with the known reserves of oil, natural gas, coal and nuclear fuels. The red bars indicate how much we have already consumed up to the last century, the orange bars how much we will consume in the current century (according to the scenario proposed), and the blue bars indicate conventional reserves, that are considered either proved or highly probable at current prices.

Further bars indicate the resources that with today's methods are considered non-conventional and not currently recoverable, but that presumably could be developed on the time scale we are considering, which includes breeding fission technologies and the thorium cycle.

It is quite clear that reserves will last well beyond the current century.

Thus the allegation that primary energy reserves are scarce, which is constantly repeated by the press, by politicians at all levels with obvious demagogical purposes, and by aggressive futurologists like Jeremy Rifkin whose sole interest is to sell their books and speeches, is clearly false and unfounded. There is no shortage that will prevent or impede the impressive social and economic development expected in this century by the emerging countries. When a resource gets scarce, the markets will adjust, but we will not run out of fuels for a very long time.

And we didn't mention nuclear fusion here as an option, because of the difficulties it still encounters in the labs, and because of the decades that will separate the physics laboratory demonstration from engineered industrial installations, and those from gaining a sizable share of the market. In any case, we all know that reserves for fusion would be plentiful, as lithium is a most abundant element.

So, the concern is not scarcity, but rather the fact that in the long term the second most abundant resource (after breeding nuclear fission) is coal. Well-known environmental concerns derive from the hypothesis that the amounts of greenhouse gases introduced into the atmosphere by the anthropic exploitation of fossil fuels may significantly influence the thermal balance of our planet, affecting climate and melting polar ice caps. This hypothesis is in fact pushing towards an increase in our energy consumption, in order to seize and confine part of the carbon dioxide that we release by oxidizing fossil fuels.



Indeed, for each toe of primary energy obtained by oxidation of fossil fuels, the carbon dioxide emission can be estimated to a very first approximation by simple stoichiometry. It is 4 tonnes of CO2 for coal, 3.1 for oil and 2.3 for natural gas. Better numbers would require considering the full life cycle, from well to final use, of each of these fuels. In the next slide, we will apply these rates to estimate the overall CO2 emissions implied by our scenario.
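As a sketch of how these factors combine, the snippet below applies them to the current mix quoted on the first slide (33% oil, 21% gas, 24% coal of about 12 billion toes), converting CO2 mass to carbon mass by the 12/44 molar-mass ratio; the function name and the mix chosen for the example are mine.

```python
# Emission factors quoted in the talk: tonnes of CO2 per toe of primary energy.
CO2_PER_TOE = {"coal": 4.0, "oil": 3.1, "gas": 2.3}
C_PER_CO2 = 12.0 / 44.0   # mass fraction of carbon in CO2

def carbon_emissions(total_toe, shares):
    """Tonnes of carbon emitted per year, given total primary energy (toe)
    and the market share of each fossil fuel."""
    co2 = total_toe * sum(CO2_PER_TOE[f] * s for f, s in shares.items())
    return co2 * C_PER_CO2

# Current mix from the first slide, applied to ~12 billion toe/year:
annual = carbon_emissions(12e9, {"oil": 0.33, "gas": 0.21, "coal": 0.24})
# about 8 billion tonnes of carbon per year
```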


I have no time for a digression on the role of waste-to-energy technology with respect to greenhouse gas emissions. So I will only mention that if we burnt all our municipal wastes in modern waste-to-energy power plants, the overall primary energy savings would not exceed 2%, but the greenhouse gas reduction would be about 5%.



Going back to the anthropogenic CO2 emissions for the next century, this is the scenario that results from the energy consumption scenario we just discussed. If during the last century humankind released a total of 300 billion tonnes of carbon in the form of CO2, in the current century we will release another 800 billion tonnes, an average of 8 billion per year.

This anthropogenic release is certainly not a negligible amount, even if it is a relatively small fraction of the complex natural balances and exchange mechanisms, by which carbon accumulates on the surface of our planet and in the ocean depths, determining the natural concentration of CO2 in the atmosphere.



The 8 billion tonnes of annual, anthropogenic, energy-related emissions are about 5% of the amounts exchanged every year in the natural carbon cycles, regulated by the production of biomass by photosynthesis, the decomposition of plant and animal biomass, and the mass exchanges accompanying seasonal temperature changes. Every year, the atmosphere exchanges 60 billion tonnes of carbon with the land surface and 90 billion with the upper layers of the oceans, which in turn exchange about 100 billion tonnes with the intermediate and deep layers. The deep ocean is very important because carbon dioxide, which is heavier than both air and water, accumulates in large and stratified amounts in the ocean's depths, where the absence of convective mixing and the high pressures maintain very large and stable viscous lakes of supercritical carbon dioxide.

Today the atmosphere contains about 750 billion tonnes of carbon in the form of CO2, the surface layers of the oceans contain 1000 billion tonnes and the earth's surface 2200, while the deep ocean layers contain 38000. So, the overall cumulative anthropogenic emissions during this entire century, which we predicted to be 800 billion tonnes, amount to about 2% of the overall natural reserves of carbon, but to about 20% of the surface amounts.
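The two percentages follow directly from the reservoir figures just quoted; a minimal check, with the dictionary keys chosen by me:

```python
# Carbon reservoirs quoted in the talk, in billions of tonnes of carbon (GtC).
RESERVOIRS = {"atmosphere": 750, "ocean_surface": 1000,
              "land_surface": 2200, "deep_ocean": 38000}

CUMULATIVE_EMISSIONS = 800   # GtC predicted for this century

total = sum(RESERVOIRS.values())              # 41,950 GtC overall
surface = total - RESERVOIRS["deep_ocean"]    # 3,950 GtC near the surface

frac_total = CUMULATIVE_EMISSIONS / total      # about 1.9%, quoted as ~2%
frac_surface = CUMULATIVE_EMISSIONS / surface  # about 20%
```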

But of course, what matters are the rates at which these natural mechanisms can metabolize the amounts of new CO2 we keep injecting.



Of the 300 billion tonnes of carbon we emitted during the past century, only about 45% appear to have been metabolized. We infer that from the fact that the other 55% has accumulated in the atmosphere, causing an increase in CO2 concentration of 80 ppm, from 280 to 360. Assuming the phenomenon is still in its linear phase, as suggested by the fact that the anthropic contribution is a relatively small fraction of the natural mechanisms, we may estimate that of the additional 800 billion tonnes we will inject in this century, again only about 55% will remain in the atmosphere, meaning that the concentration will go up another 220 ppm, to a final 580. If we believe in the apparent direct proportionality between the increase in CO2 concentration and the increase in average surface temperature, then we would predict another 1.6 degrees Celsius of temperature increase, on top of the 0.6 degrees that have already occurred, with all the climatic changes that will follow.
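This linear extrapolation can be sketched as follows. The function name and parametrization are mine; the constants are calibrated on the figures just quoted (55% airborne fraction, 165 GtC producing 80 ppm, 0.6 degrees C per 80 ppm).

```python
def projected_warming(emitted_gtc, airborne_fraction=0.55,
                      gtc_per_ppm=300 * 0.55 / 80,   # 165 GtC gave 80 ppm
                      degc_per_ppm=0.6 / 80):        # 0.6 C per 80 ppm
    """Linear extrapolation used in the talk: a fixed fraction of emitted
    carbon stays airborne, concentration rises proportionally, and so does
    the average surface temperature. Returns (delta ppm, delta degrees C)."""
    delta_ppm = emitted_gtc * airborne_fraction / gtc_per_ppm
    return delta_ppm, delta_ppm * degc_per_ppm

delta_ppm, delta_t = projected_warming(800)
# delta_ppm is about 213 (quoted as ~220), so 360 + 213 gives ~580 ppm;
# delta_t is about 1.6 degrees C on top of the 0.6 already observed
```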



Looking at the well-known data obtained from Antarctic ice cores, we see from the upper graph that CO2 concentrations never went above 300 ppm in the past 400 thousand years, while they were above 300 ppm during most of the past century. So, indeed, this anomaly may well be correlated with our industrialization, and these data are compatible with the view that anthropic emissions are to be held responsible for the climatic changes.

Other evidence, however, shows that at the end of the last ice age, deep-ocean temperatures warmed up 1000 years earlier than the tropical surface ocean, well before the rise in atmospheric CO2. These findings suggest that the rise in greenhouse gases then was a result of warming rather than its main cause, although later on it is likely to have accelerated the meltdown by positive feedback.

So, some doubt remains: the enormous costs and efforts, also in terms of additional primary energy consumption, that are necessary to obtain significant reductions in greenhouse gas emissions could easily be rendered vain by small fluctuations in the many broad natural mechanisms that regulate the thermal balance of our planet.

But we should be optimistic and accept such costs during the current transition century, viewing them not only as buying us an insurance policy until we better understand whether we can do without CO2 segregation, but also, as President Obama is saying, as buying our economy a new 'clean energy' goal that will help us recover from the economic crisis.



It is important, though, that legislators, politicians, media, and ultimately the people not lean on disinformation or cheap futurology, and not be tempted by false promises of easy solutions. Decision makers and everybody else should never forget the complexity, inertia and globality of the social and economic context in which the energy and environmental problems are embedded.

For example, especially in Europe, in the name of sustainable development, a lot of research money has been attracted by the mirage of the so-called hydrogen economy. The idea that hydrogen fuel synthetically produced from water could serve as an energy carrier alternative and superior to electricity is illusory and misleading. Electrical energy, even if produced by wind turbines or solar photovoltaics, is very valuable. If we use it to split water into hydrogen and oxygen, only to recombine them later in a fuel cell, we waste more than half the electricity we started with. Such a poor use of electricity can make sense only in very marginal situations, certainly not to sustain an entire economy.
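The round-trip loss claimed above can be illustrated with a two-step efficiency chain. The component efficiencies below are my illustrative assumptions, not figures from the talk; the qualitative point, that more than half the electricity is lost, holds for any plausible values.

```python
# Round trip: electricity -> electrolysis -> hydrogen -> fuel cell -> electricity.
# The component efficiencies are illustrative assumptions, not quoted figures.
ELECTROLYZER_EFF = 0.70   # electricity to hydrogen (assumed)
FUEL_CELL_EFF = 0.55      # hydrogen back to electricity (assumed)

round_trip = ELECTROLYZER_EFF * FUEL_CELL_EFF   # about 0.39
wasted = 1.0 - round_trip                       # more than half is lost
```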



To decide whether to invest in hydrogen cars, battery-operated electric cars or hybrid vehicles, we must take into account the entire energy life cycle, from well to wheel, and we must compare the alternatives on equal grounds.

The data proposed by automotive experts, considering all possible scenarios, seem to confirm that battery electric and hybrid cars are the least consuming from a well-to-wheel perspective. I have no time here for the details, but you can check them later, as my viewgraphs are online. The conclusions appear as follows.



If the primary energy used to produce hydrogen and electricity is a fossil fuel, the best hydrogen car combination consumes 40% more than an electric or hybrid car, which also means 40% more CO2 emissions. If instead we start from electrical energy, generated for example by wind turbines or solar photovoltaic cells, then the prospect for hydrogen cars is even worse, as the best combination consumes more than double the electricity of a battery electric car.

So, I don’t know how it could happen that so many politicians and, more surprisingly, most American and European carmakers preferred to invest in fuel-cell hydrogen car research rather than in the development of better electric batteries and hybrid vehicles, especially as they were all aware that some Japanese carmaker was betting on hybrid cars.



Now. What’s the role of thermodynamics? Well, many of you will remember that about thirty years ago, most of the science and engineering community used to say that thermodynamics is a ‘dead subject’, that it deals only with equilibrium states, and that all there is to say about equilibrium has already been said. The last ten to fifteen years proved that they were dead wrong. Thermodynamics is back, up and strong, and it is fostering the transition to new energy technologies that will make our globe more efficient and cleaner.

Perhaps thermodynamics is the only discipline that has not only survived, but played a key guiding role in three and a half scientific revolutions. The first was relativity. The second was quantum theory. Then came the half revolution about information theory, which convinced most people that entropy is just a subjective measure of how much we know about the true physical state of a system; fortunately that ended when physicists, too, finally convinced themselves that ‘information is physical’. The third revolution is ongoing, and it is the nonequilibrium revolution. We are still at the dawn of this one, but it is pervading all science and engineering disciplines: biology, biomedicine, bioengineering, chemistry, electrochemistry, interface phenomena, fluid mechanics, and transport phenomena. At all scales (macroscopic, mesoscopic, microscopic) thermodynamics plays a central role. Entropy, which in most textbooks is still taught as an equilibrium-only concept, plays instead a central role also for nonequilibrium states. Twenty years ago we indicated how entropy can be rigorously defined also for nonequilibrium states. It is time for new thermo textbooks to include this definition, also in undergraduate courses.

Anyway, the next technological transition requires engineers, physicists, chemists, and biologists teaming up to devise efficient ways to gain better control of nonequilibrium states, and to take fuller advantage of their spontaneous tendency towards equilibrium. This is why thermodynamics has returned to the forefront of research in all fields. Entropy and nonequilibrium are the real challenges for the future of energy. Research and progress are of course already under way, mainly feeding the transition from traditional flame combustion to multi-step combustion and fuel-cell electrochemical oxidation integrated with the thermal cycle. They are also under way in many other fields that will impact the efficiency of the end uses of energy.



Now, let’s return to this very exciting diagram about the logistic historical growth of the second-law efficiency of the best available mature technology for converting fossil fuels into work and electricity. It is a monument to our consistent improvements in the way we have used the talents that Nature gave us in the form of primary resources and brains. Below the time axis, in this version of the chart, I added a selection of pioneers whose intellectual contributions were directly or indirectly instrumental in achieving this impressive steady progress.

You may observe that the names I selected, with few exceptions, are not those of the inventors of new equipment, but of the fathers of thermodynamic science. This is because major innovations often find their seeds in new science, which provides better fundamental understanding. Once we gain a better understanding, we can transform it into ideas for better control of the natural phenomena, and finally design new, more efficient equipment.

No one, of course, knows today what technology will prevail in providing the expected transition from 60% to 80% efficiency over the next sixty years. But a few advances in that direction have already been made, at least on paper or at the experimental stage. The direction in which we should move is again indicated by thermodynamics: we need to find ways around the intrinsic inefficiencies of our current flame-based technologies.



The current technologies based on burning our primary fuels have an inherent source of inefficiency in the flame itself. For example, in a combined-cycle power plant, the combustion irreversibilities eat up about 30% of the fuel exergy, while the remaining irreversibilities in the entire power plant account for only an additional 10-15% of exergy loss. Multiplying the 70% efficiency of the flame by the 90% efficiency of the thermal cycle yields the 63% overall efficiency which today a few full-size combined-cycle power plants can achieve.

The important lesson here is that future technology cannot possibly reach 80% overall efficiency just by improving the thermal cycle: even if we made the cycle 99% efficient, the overall efficiency would still be limited to about 70% by the flame irreversibility.
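The argument is a simple product of efficiencies; the function name is mine, and the figures are the ones quoted for today's combined cycles.

```python
def overall_efficiency(flame_eff, cycle_eff):
    """Overall second-law efficiency of a flame-based power plant:
    flame and thermal-cycle efficiencies multiply."""
    return flame_eff * cycle_eff

today = overall_efficiency(0.70, 0.90)     # 0.63: today's best combined cycles
ceiling = overall_efficiency(0.70, 0.99)   # ~0.69: even a near-perfect cycle
                                           # cannot reach 80% past the flame
```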


It has been known for a long time that combustion efficiency would be higher if the thermal cycle could operate at higher temperatures of the products of combustion. Such higher flame temperatures would be easily achieved by preheating the air intake or by using, instead of air, pure oxygen mixed with recirculated products of combustion. But high temperatures are a serious challenge to all materials. Thus, an important alternative is to replace the usual single-step combustion flame with a sequence of partial oxidation steps, such as methane reforming, coal gasification, and the water-gas shift reaction. In the end, the fuel is oxidized into water and CO2 as usual, of course. But sequencing the fuel oxidation into partial steps, involving intermediate reactants such as carbon monoxide and hydrogen as well as electrochemistry, allows a better integration of the partial combustion processes with the thermal cycle, thus giving better control over the nonequilibrium states and hence over the entropy production by irreversibility.


Along these lines, very promising oxy-fuel combustion technologies are being developed, with various new integrated cycles mainly developed in connection with CO2 sequestration for ‘clean coal’ power. The best designs seem to promise overall exergy efficiencies up to 47%, after full CO2 sequestration, which is quite remarkable.



The problem that must be solved to get around the inherently high irreversibilities of traditional combustion in adiabatic flames is represented very schematically in this enthalpy versus entropy diagram. The blue curve shows the equilibrium states of the fuel-air mixture, the red curve those of the products of combustion.

In an adiabatic flame, a sequence of unconstrained, highly nonequilibrium states develops rapidly from the blue curve to the red one, horizontally, at constant enthalpy. On the entropy axis we read the entropy change, which is entirely due to irreversibility; it is unavoidable because we have no control over the nonequilibrium states.
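A rough sense of the magnitude of this loss can be obtained from a toy model (my illustration, not from the slides): treat the products as an ideal gas with constant specific heat, brought by the flame to an adiabatic flame temperature T_ad, and ask what fraction of the released heat the best conceivable downstream cycle could still recover as work while the products cool back to the environment temperature T0:

```python
import math

def flame_exergy_efficiency(T_ad: float, T0: float = 298.15) -> float:
    """Fraction of the heat released in an adiabatic flame that remains
    available as work, for constant-cp ideal-gas products cooling from
    T_ad to T0: 1 - T0*ln(T_ad/T0)/(T_ad - T0)."""
    return 1.0 - T0 * math.log(T_ad / T0) / (T_ad - T0)

# For a typical adiabatic flame temperature around 2200 K this gives
# roughly 0.69, i.e. about 30% of the fuel exergy is destroyed in the
# flame itself -- consistent with the combined-cycle figures above.
print(flame_exergy_efficiency(2200.0))
```

The lost 30% is exactly T0 times the entropy generated in the constant-enthalpy jump from the blue curve to the red one.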


If we could control them, then we could extract work by exploiting the spontaneous tendency towards the red curve. This is what fuel cells achieve.


Many schemes have been studied which integrate solid-oxide high-temperature fuel cells with various combinations of thermal cycles, yielding overall net efficiencies that on paper are as high as 76%. So this route is very promising, and it makes the next goal of 80% appear not so far away, provided the next generation of engineers, physicists, and chemists is trained to understand and master the foundations of nonequilibrium thermodynamics.



So, let me conclude with a recommendation that I am sure no one in this room will dislike. We need to invest much more money in research, especially open-ended fundamental research, including of course thermodynamics! But the question is: who should put up the money?


Pharmaceutical companies invest 18% of their revenues in R&D. Semiconductor firms invest 16%. Energy companies invest less than 0.3%. Why so? I think it has to do with the large inertia of the energy system and the very long times that new technologies take to penetrate the energy market.



In the United States, the last 30 years provide clear evidence that in a free energy market we cannot expect energy companies to promote and sustain an energy technology revolution. It is the government that must do it, funding it alongside all other fundamental research. In 1980, 10% of U.S. federal research dollars went to energy. About a year ago, the share was down to 2%. With that pattern of investment, the U.S. could not expect to be a leader in the development of the 80%-efficient technologies which we forecast for the second half of the century.


Now, fortunately, President Obama seems to have understood this. He wants to rally all Americans around a new dream of leading a clean-energy transition, as a means to regain economic leadership. So, his first step was to make deals with the major universities and invest heavily in research, including fundamental research (I still have to check whether that includes, as it should, enough thermodynamics...).

So I conclude by expressing my hope that the countries of the European Economic Area will also increase their funding of fundamental scientific research.



With this, I thank the Jean-Michel Folon foundation for publishing the evocative works of art that I have freely adapted for the backgrounds of my slides. And I thank you all for your attention, and again for the kind invitation. Thank you.



Well, if I may add one other item to the picture: in the growing community of advanced-energy-systems and thermodynamics experts, it is well understood that if choices are to be made in the name of sustainable development, they should be based on full life-cycle analyses that consider, for each option, the entire sequence of processes from primary energy to final use. So, we should lobby for better legislation and regulations that could ease and promote investments in efficiency improvements in all sectors, and improve the general understanding.

For example, in the US, cogeneration is still not as widespread as it is in Europe. Moreover, most national agencies still report statistics on energy productivity in the misleading language of the simple energy balance. The US Energy Flow Charts compiled until very recently by the Lawrence Livermore National Lab, for instance, add up heat and electricity on an equal footing: they simply add energy used in the form of residential and industrial heating to energy used in the form of electricity. This makes the overall US efficiency of primary energy consumption appear to be about 37%, while simple thermodynamic considerations prove that it is well below 20%. Adding heat and electricity in such statistics is essentially a crime, in the same sense that it is a crime to burn a fuel and get only heat out of it instead of using it better in a cogeneration facility, or to use electric heaters in homes where heating is needed for more than six months a year...
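The gap between the two figures comes entirely from how heat is weighted. A minimal sketch with hypothetical round numbers (my illustration, not the actual LLNL data):

```python
# Hypothetical numbers: a country delivers 22 units of electricity and
# 33 units of low-temperature heat from 150 units of primary energy.
primary = 150.0
electricity = 22.0
heat = 33.0

# First-law accounting adds heat and electricity on an equal footing,
# as in the flow charts criticized above: roughly 37%.
energy_efficiency = (electricity + heat) / primary

# Exergy accounting instead weights low-temperature heat by its Carnot
# factor 1 - T0/T, e.g. ~1/6 for space heating at ~360 K against a
# 300 K environment, pushing the figure well below 20%.
T0, T_heat = 300.0, 360.0
carnot = 1.0 - T0 / T_heat
exergy_efficiency = (electricity + carnot * heat) / primary
```

The same delivered services thus score about 37% by the energy balance but under 20% by the exergy balance, which is the thermodynamically meaningful one.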



This slide complements and supports the data shown in the lower part of the table on slide 23.