(Updated July 2004)
Electrical and Communication Systems Division
National Science Foundation*, Room 675
Arlington, VA 22230
pwerbos@nsf.gov
Back at the time of the great oil shocks of 1973 and 1979,
many of us in all sectors of the society asked hard questions about the
long-term prospects and sustainability of the world’s energy systems.
We asked: “Can we really survive all this, in the
long-term, and if so, how?” Despite the wide diversity of views, certain
core realities did emerge from the intense discussions and research of the
early 1980s. Back in those days, I was the lead analyst at the Energy
Information Administration (EIA) of the U.S. Department of Energy (DOE)
assigned to evaluate the various long-term models and analyses, the technical
and economic assumptions going into them, the common theories, and what we
could really learn from them all. (For example, see the sources cited in my
chapter “Energy and Population” in Lindsey Grant, ed., Elephants
in the Volkswagen, W.H. Freeman, 1992. See also the special issue on
Engineering-Economic Modeling in Energy: The International Journal,
March/April, 1990.)
There was, and remains, a
bewildering complexity of numbers, technologies, and viewpoints which no one
could discuss completely even in a short book; however, certain core themes
remain valid to this day.
In order to survive, in the
long-term, we need to meet two core challenges:
(1)
Car fuel: we need to
continue to find ever-larger affordable quantities of fuel for cars,
trucks, buses, etc., even as conventional oil supplies become more and more of
a problem. This is once again an urgent short-term issue for the world, as we
contemplate the relations between the Middle East and the industrialized world
over the next two decades, and as we consider the needs of developing nations.
The key “number” we have to watch is the supply of the specific forms of energy that we can use in
cars.
(2)
Primary
Energy: we also need to watch the “total energy”
number. We don’t have to worry about having enough total energy right
now, but we do have good reasons to worry about the price of electricity, the
long-term trends, and our ability to hold down costs while also respecting the
two constraints below.
The challenge is made much harder by the connections between
energy and other sectors of the world. The survival of the human race is not
just a matter of energy as such. In fact, issues of war and peace and weapons
of mass destruction threaten our survival far more directly than energy does. Those larger issues depend a
lot on issues like education, population, water, food, spiritual culture,
biodiversity, and the role of women, which are far beyond the scope of this
essay. The key challenge in the energy sector as such is to improve the two
“numbers” above, while also
respecting two key constraints:
(3)
Environmental
Sustainability. The most important single number we need to worry about is
the amount of CO2 in
the atmosphere. Yet as we chart the near-term pathway to a sustainable future,
NOx emissions and stratospheric ozone also matter a lot. We may debate the
importance of these numbers and how it varies over time, but certainly these
are major global concerns.
(4)
Global
Nuclear Security. Scenarios have been considered for meeting all the
energy needs of a growing world economy, across all sectors (not just
today’s electricity use), based on conventional local nuclear power plants.
In the Middle East, for example, economics would almost certainly require a
massive use of local construction companies such as Bin Laden enterprises or
certain famous Russian and Chinese all-purpose export companies. But it is
questionable whether the world is currently able to manage even the much
smaller flow of nuclear materials and technology across Eurasia today; there is
a compelling national security reason to do our best to avoid a growth in the
problem by orders of magnitude.
In summary, our survival depends on our ability to think
hard about the four key “numbers” here. It will be very difficult
to keep all four numbers on track, in the long-term, while still providing for
the essential growth in the world economy. The main goal of this essay is to
reassess where we stand in trying to keep these numbers on track – to
achieve global energy sustainability.
Before
I get into the details, however, I should address some concerns that some
people might have about this problem statement.
First,
I do not claim that energy sustainability is the only challenge before us. For example, the gap between rich and
poor is growing in some areas (e.g. US versus Africa) but shrinking in others
(e.g. Japan versus China). The world does need to address
other dimensions of sustainability. But energy sustainability is one of the key requirements for human
survival in general.
Second, there are other issues
which the economy needs to address in the energy sector. We will need to
provide aviation fuel and feedstocks for plastics, for example. We will need to
provide an optimal mix of incentives to give consumers the benefit of all
available energy technologies, not just the big ones which allow us to keep the
Big Four Numbers in balance. But if we do not keep the Big Four Numbers in
balance, our economy dies, and the fine-tuning is for naught. This essay will
focus on the Big Four Numbers, and complicate things only when it is really
necessary.
Third, with due respect to Amory
Lovins, I do not really believe that conservation and small-scale renewables
would be enough to balance the Big Four Numbers. Aggressive conservation
technologies will be a major part of this essay – but the biggest
potential for conservation comes from identifiable large-scale technologies that
will be part of this discussion. The most sober studies have consistently shown
that the smaller-scale stuff is like the fine-tuning mentioned above. For
example, a technology like hydropower can be very important in some areas, like
Brazil, but its potential is very far from what we need to balance the
four big numbers for the world as a whole.
The fine-tuning deserves support and attention, but it is
not enough to solve the basic problems here.
Fourth,
you may notice that I used the phrase “essential growth.” Many
economists have noted that GNP as such is a very poor measure of human
well-being. There is a huge literature on alternative indices or measures of
human well-being. Clearly it would be grossly irrational for the governments of
the world to straight-jacket humans everywhere, to force them to become an ever
more robotic workforce in a quasi-military crusade to maximize GNP. This would
be a gross parody of the values of human freedom and human sovereignty which
are the historic roots of free-market economics. Nevertheless, in practical
terms, the world will still need growing supplies of energy in order to provide
more decent choices to a majority of the people on earth, even if we firmly
resist the extreme forms of materialism.
Fifth,
I will not be discussing any options for gross distortion of market economies.
Occasionally governments have attempted to use compulsion or quotas to force
people to use technologies which cost much more than politically incorrect
alternatives. There are some incentive structures which do need to be
revisited, but the goal here is to discuss new technologies and other options
which are aimed at getting results within
a market-oriented economy. Technologies which cost more than the alternatives,
without providing some other important benefit to the consumer, will not be
proposed. The word “we” should be interpreted to mean
“we” human beings, not “we” U.S. government officials;
we human beings have a lot of work ahead of us.
Sixth, this
paper is being updated in July 2004, in order to reflect a few of the key points
which have emerged in the intense follow-on discussions within various nodes of
the Millennium Project.
In
summary, the main challenge before us is the same as it was 20 years ago
– to keep on track the same four numbers we had to worry about then. But
the options and technologies before us have changed dramatically. The time is
ripe for a radical reassessment.
Because
energy options are so complex and varied, we must be very careful in how we
phrase and understand the key questions here.
I
will focus on three key questions of interest here:
(1)
How could we someday zero out the net CO2 emissions which result from
powering a growing fleet of cars and trucks?
(2)
How could we keep these fleets going when natural
supplies of oil (including tar sands) start to “run out” (rise to
prices beyond what the economy can bear)?
(3)
What could be done to get us from where we are now to
complete sustainability by these measures?
In the next subsection, I will focus on the first two
questions. These questions may seem narrow, in a way, to some energy economists
– but it is essential to focus on very specific questions, to begin with,
because of the enormous complexity of the world energy system. The following
subsection will address more of the complex near-term challenges.
Many
futurists have argued that the answer to questions 1 and 2 is simple: that we
must shift over to a “hydrogen economy.” The “hydrogen
economy” is a very broad and complex “meme.” (See the
National Hydrogen Plan and the National Hydrogen Strategy posted at
http://www.eere.energy.gov/hydrogenandfuelcells/ .) In this section I will
focus more narrowly on the core hypothesis of that vision: that we should plan
for cars and trucks which carry hydrogen as such on-board as their primary
source of energy.
Hydrogen
storage on-board cars seems at first to be a very serious option here. If we
can make it work, either with fuel cell or combustion engines, we end up with
zero CO2 emissions from cars. And we don’t need gasoline to
put in our cars.
But
there are at least four other ways to zero out net CO2 which merit very serious consideration,
in my view: (1) electric cars; (2) continued use of carbon-based fuels coupled with recycling of CO2
from the atmosphere (or upper oceans); (3) use of carbon-free fuels like
ammonia which can easily be “reformed” to generate hydrogen on-board a
vehicle; (4) use of “thermal batteries,” which can store heat in a
car for later use by a heat-to-motion engine, like the new STM system described
at www.stmpower.com.
In
this section, I will discuss how we can explore and use these five key
technologies in the coming decades. But first I owe you two caveats. First,
there are some higher-risk technologies, like wind-up cars and compressed air
and others, which have some very intense enthusiasts. Many of these do merit
continued high-risk exploratory research, in hopes of a breakthrough. However,
the latest information I have seen is not yet enough to warrant putting them on
the “A list” of technologies which might provide the foundation for
assured survival of the world economy. This list has changed over time, and we
need to be ready for further changes – but I don’t see them there
as yet, based on any information I have had access to. Second, there are some
options, like toluene as a hydrogen carrier, which are hard to classify here;
if they aren’t reformable, and they don’t lead to CO2 emissions
from cars, I would classify them as part of the hydrogen economy option.
Which
of these five technologies offers the greatest long-term hope of holding down the
cost of car fuel?
We
really do not know as yet which of the five will work out – if any.
Detailed life-cycle cost estimates are extremely speculative at this time, when
we don’t even know the best way to implement any one of the five. A
rational strategy for developing these technologies would rely heavily on
concepts like decision-tree analysis and “buying information” to
reduce uncertainties, in the spirit of wildcat drilling. But we can learn a lot by doing some
straightforward comparisons. The
comparisons in this paper could be refined much further by more careful
analysis of the concrete issues raised here– but even these simple
comparisons are enough to bring out the strategic picture we are facing.
First
of all, we may ask – if cars use hydrogen, in an “ultimate”
scenario using no fossil fuels, where would the hydrogen come from, and how
would it be transported?
There
are two obvious large-scale nonfossil sources of hydrogen: (1) use of primary
energy sources, like nuclear plants or solar heat or photovoltaics or wind,
which can also produce electricity at a comparable or better efficiency; (2)
processes based on artificial or bioreactor photosynthesis, like
biophotohydrogen. (Conversion of ordinary biomass, like wood or corn husks, to
hydrogen would not have a potential scale of output relevant here, because of
limits to the sustainable supply of such biomass [OTA report].)
If
the first of these sources wins out, then the best way to transport the
hydrogen to market would be indirect.
Instead of transporting the hydrogen directly as a gas, it would be far cheaper
to make electricity at the source, send it to the market over the electric
power grid, and convert the electricity to hydrogen at or near the “gas
station” by electrolysis.
Why
not transmit the hydrogen by pipeline instead? The National Renewable Energy
Laboratory (NREL) has published numbers for hydrogen pipeline costs which seem
plausible, at first. (See C. Padro and V. Putsche, Survey of the Economics
of Hydrogen Technologies, NREL/TP-570-27079, September 1999.) However, the
citations which go with their discussion clearly establish that pipelines of a
given volume could only transmit half as much energy as natural gas pipelines
of the same dimensions. Well-established hydrogen embrittlement problems (cited
but not incorporated into the estimates) would probably double costs again or
more. If we consider how many billions of dollars and years (and energy losses)
went into the gradual buildup of the US natural gas pipeline network, and then
multiply by four or more – clearly this would not come cheap. By
contrast, the US already has a massive electric power transmission grid which
is used up to capacity only at times of peak load. Why should we pay hundreds
of billions or trillions of dollars for a transport service we could obtain
almost for free? Furthermore, why should we wait
for the decades (at best) to build a new infrastructure, when we already have
one available today?
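The rough multiplier in the argument above can be reduced to a one-line calculation. Both factors here are taken from the discussion in the text and should be read as loose assumptions, not engineering data:

```python
# Back-of-envelope cost multiplier for hydrogen vs natural gas pipelines,
# using the two factors discussed in the text (illustrative assumptions).

energy_capacity_ratio = 0.5      # an H2 pipeline moves ~half the energy of a
                                 # same-diameter natural gas pipeline
embrittlement_cost_factor = 2.0  # embrittlement-resistant construction roughly
                                 # doubles cost, per the cited caveats

# Cost per unit of delivered energy, relative to a natural gas pipeline:
relative_cost = embrittlement_cost_factor / energy_capacity_ratio
print(f"Relative cost per unit of energy delivered: {relative_cost:.0f}x")
```

This is where the "multiply by four or more" in the text comes from: halved throughput and doubled construction cost compound multiplicatively.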
There
are two reasonable counterarguments here.
The first is that some sources of
electricity are very hard to connect to the electric power grid at present.
Certainly we have heard stories about wind generators that effectively
delivered negative energy to the
grid, when they were hooked up in a naïve fashion. But competent power
engineers, like Mohammed El-Sharkawi of the University of Washington, have also
reported how they were able to go back and fix the hookups to substantially
reduce such problems. There is a new paradigm for electric power grid control,
dynamic stochastic optimal power flow (DS-OPF), which ought to be able to
overcome these old problems – and also give proper credit and payment to
small-scale generators of electricity. (See J.Si et al, eds, Handbook of
Learning and Approximate Dynamic Programming, Wiley and IEEE Press, 2004.)
Certainly the development and deployment of this kind of technology should be a
priority in the quest for energy sustainability.
The second argument is that the
energy losses in electrolysis might outweigh the many costs (including some level of energy losses)
in hydrogen pipelines. I must confess that I do not have numbers here. Chemists
for years have reported efficiencies near 100 percent for known electrolysis
processes – yet DOE is investing heavily in reverse fuel cells and the
like, expected to achieve efficiencies more like 80 percent. By analogy to other
such systems, I would guess that the discrepancy is a matter of scale. Thus one
could imagine hydrogen being made from electrolysis at a metropolitan level, to
capture the economies of scale, and moved around very locally, most likely by
truck, or else being produced
literally at the gas station. The whole scheme requires very good hydrogen
storage in vehicles in any case. One may debate the relative likelihood of the
metropolitan option versus the gas station option, but it seems reasonably
likely that one or the other would be efficient enough to outweigh the huge
costs, delays and problems with long-distance pipelines. (Note added in August
2003:
Gene Berry of Lawrence Livermore has explained to me how it
is easy to get circa 65 percent efficiency, and approach the theoretical 80+
percent limit, using small scale electricity-to-H technology very familiar to
some of us; very large facilities inputting an optimal mix of heat and
electricity can do better in theory, but the overall economic advantage is
speculative and questionable, and marginal at best. Many earlier more
optimistic numbers were based on misleading ways of measuring efficiency.)
This
then leads to an obvious question: if we are using electricity from the grid to
make the hydrogen, why not insert the electricity directly into the cars instead? Wouldn’t this save a lot of
wasted energy in conversion back from hydrogen to electricity on board the car?
If energy efficiency were the only consideration, one might expect that
electric cars would win over hydrogen, hands down, if the hydrogen ultimately
comes from the kinds of primary energy sources I just discussed.
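The efficiency intuition in the paragraph above can be made concrete by multiplying out each conversion step. Every number below is a placeholder assumption chosen for illustration (the 65 percent electrolysis figure echoes the small-scale estimate discussed later in this section); the point is the structure of the comparison, not the exact values:

```python
# Illustrative grid-to-wheels comparison of the two pathways discussed above.
# All efficiencies are rough placeholder assumptions, not measured data.

# Pathway A: grid electricity -> battery -> electric motor
battery_round_trip = 0.85
electric_motor = 0.90
electric_pathway = battery_round_trip * electric_motor

# Pathway B: grid electricity -> electrolysis -> compression/storage
#            -> on-board fuel cell -> electric motor
electrolysis = 0.65          # small-scale figure discussed later in the text
compression_storage = 0.90   # assumed losses in compressing/storing hydrogen
fuel_cell = 0.50             # assumed on-board fuel cell efficiency
hydrogen_pathway = electrolysis * compression_storage * fuel_cell * electric_motor

print(f"Electric pathway, grid to wheels:  {electric_pathway:.0%}")
print(f"Hydrogen pathway, grid to wheels:  {hydrogen_pathway:.0%}")
```

Under these assumptions the electric pathway delivers well over twice the energy to the wheels per kilowatt-hour drawn from the grid, which is why energy efficiency alone would favor electric cars.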
Before
discussing the other considerations, let me go back to the other possible
renewable source of hydrogen – biophotohydrogen. How might that change
the equation here? I have heard estimates of efficiency for photosynthesis
ranging from 3 to 8 percent – all much less than the 30 percent or so
achievable by proven solar thermal technologies, like the Sandia solar thermal
technology using the SAIC design for mirrors to concentrate light and STM
systems to convert heat to motion and electricity generation. One may ask: how
could biophotohydrogen have any hope of competing with that, assuming that the cost
per square meter of mirrors will always be much less than the cost per square
meter of biocultures? In fact, the NREL cost estimates for biophotohydrogen
show grossly noncompetitive costs – except
for a kind of placeholder number for hopes of a certain set of high-risk
breakthroughs. (See Wade Amos, NREL, Cost Analysis of Photobiological
Hydrogen Production From Chlamydomonas reinhardtii Green Algae, September
2000.)
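The land-area implication of those efficiency figures is worth spelling out. Using the 3 to 8 percent photosynthesis range and the roughly 30 percent solar thermal figure cited above (all taken from the text as rough estimates):

```python
# Land-area comparison implied by the efficiency figures in the text:
# 3-8% photosynthesis vs ~30% for proven solar thermal systems.

solar_thermal_eff = 0.30

for bio_eff in (0.03, 0.08):
    area_ratio = solar_thermal_eff / bio_eff
    print(f"At {bio_eff:.0%} biological efficiency, solar thermal delivers "
          f"about {area_ratio:.1f}x the energy per unit of collection area")
```

So biophotohydrogen would need roughly four to ten times the collection area per unit of energy, which is why its cost per square meter would have to be dramatically lower than that of mirrors to compete.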
Included in those proposed
breakthroughs is a concentration of light, by mirrors, into a bioculture which
is asked to function under 10 to 100 times as much light as we normally
encounter on earth. It would certainly not violate the laws of physics (so far
as we yet know) to develop organisms
capable of such novel and useful behavior, but this is certainly an area
for high-risk basic research, with no guarantees of ultimate success. It would be a major challenge to develop
new kinds of adaptive nonlinear control powerful enough to keep organisms alive
and functioning under the unique and stressful conditions
envisioned here. Conventional adaptive control would almost
certainly fail, because bioreactors are generally “nonminimum phase
plants.” (A lengthy example was discussed by Lyle Ungar of the University
of Pennsylvania in the NSF workshop on biocontrol back in 1990 organized by
Peter Katona.) Linear robust control can work well enough if the plants are
close enough to linear and disturbance-free, but the chances of that do not
seem encouraging here. However, Donald Wunsch of the University of Missouri
Rolla has a paper forthcoming in the IEEE Transactions on Neural Networks which
reports success with a new nonlinear intelligent control scheme for
bioreactors. There may be hope, if we press the technology hard enough. Or not.
But we are critically dependent on researchers able and willing to break out of
the existing established paradigms. It is much harder to find researchers able
to do this than researchers who promise it. Many more traditional control
engineers would be far less optimistic than myself, when confronted with this
kind of plant.
If
biophotohydrogen should be workable, in the end, what would that imply for the larger picture?
First, we could expect that
artificial photosynthesis would also
provide a way to extract CO2 from the air (or from surface level
seawater). The net benefits would be the sum of two main products – the hydrogen production and the CO2
removal. Second, biology suggests it should be far easier to produce carbon-based fuels, using this CO2
extracted from the air, than to produce hydrogen. (Indeed, the NREL reports on
biophotohydrogen make it clear that the processes they are looking at are far
more complex and esoteric than the vast, prevalent normal mechanisms for
organisms to produce carbon compounds). If biophotomethanol were used as a car
fuel, instead of hydrogen, we would still end up with zero net CO2
emission – but we could use the existing world infrastructure for
handling liquid fuels, and our chances of success would be far greater.
Now
let us consider these tradeoffs more carefully. If the economy chooses any of the four options discussed so far
– electric cars, hydrogen produced “at the gas station,”
biophotohydrogen or biophotocarbon fuels – net CO2 emissions go
to zero. Thus in any of these long-term scenarios, the net value of further CO2 reduction goes to
zero in the long term. Once the CO2 premium goes to zero, I would
argue that biophotomethanol totally dominates over biophotohydrogen, in
economic terms. Methanol can be used in fuel cell cars, just as hydrogen can;
decades ago, Pat Grimes (then at Allis-Chalmers, writing in the AIChE
Proceedings) demonstrated a direct methanol fuel cell with higher efficiency
than the well-publicized hydrogen PEM fuel cells reported in recent years. The efficiency
penalty of using methanol instead of hydrogen may be as much as 10 percent, for
optimally designed cells, but transport losses in hydrogen would make up for that; more importantly,
the fuel storage and transport infrastructure of methanol provides a huge
advantage over hydrogen, and production may be easier. (Other carbon-based fuels
would impose a major additional efficiency penalty relative to methanol in use onboard optimally designed fuel cell
cars, because small-scale efficient steam reformers cannot be used with them.) Mirna
McDonald of Penn State University, under a small grant from NSF, working with
Grimes, has recently replicated some of Grimes’ technology for
carbon-tolerant alkaline fuel cells, which appear to promise far lower costs,
higher efficiency and longer lifetime than the better-known PEM fuel
cells; likewise, related work reported at www.electricautos.com may contain
important complementary technologies.
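The methanol-versus-hydrogen tradeoff just described can be put in rough numbers. The 10 percent cell-efficiency penalty comes from the text; the transport and storage figures are my own loose assumptions for illustration:

```python
# Rough comparison of methanol vs hydrogen fuel cell pathways, using the
# ~10% cell-efficiency penalty cited in the text and assumed figures for
# transport/storage losses (all illustrative).

fuel_cell_h2 = 0.55                        # assumed hydrogen fuel cell efficiency
fuel_cell_methanol = fuel_cell_h2 * 0.90   # ~10% penalty for methanol cells

h2_transport_storage = 0.85   # assumed losses moving and storing hydrogen
methanol_transport = 0.98     # liquid fuel uses existing infrastructure cheaply

h2_overall = fuel_cell_h2 * h2_transport_storage
methanol_overall = fuel_cell_methanol * methanol_transport

print(f"Hydrogen pathway overall:  {h2_overall:.1%}")
print(f"Methanol pathway overall:  {methanol_overall:.1%}")
```

Under these assumptions the hydrogen transport losses more than cancel the methanol cell's efficiency penalty, consistent with the argument above, and the infrastructure advantage comes on top of that.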
In
summary, the long-term economically plausible alternatives seem to come down to
three – electric cars, hydrogen produced from electrolysis at or near the
gas station, and biophotomethanol. (There are also the two more esoteric
options mentioned earlier, which I will discuss later in this section.)
What
will drive the tradeoffs between these three? What kinds of research and other
efforts would give us the best chance of realizing the full potential of all three?
Let
us first re-examine the electric car option. After the great oil shock of 1979,
public enthusiasm for electric cars became enormous. Startup companies without
previous experience in making cars marketed a number of instant electric cars,
with performance so bad that many people now start out with a very large irrational
bias against this technology. More serious manufacturers, like General Motors,
worked hard to develop a more realistic, high-performance car – but even
that car had problems.
Until
recently, conventional wisdom amongst energy experts ruled out electric cars on
two grounds: (1) cost, both of batteries and
of supporting technologies like power electronics and control chips;
(2) limited driving range, due to low energy densities in
batteries.
But
now things have changed. Not only electric cars, but fuel cell cars and
conventional hybrid cars make heavy demands on power electronics and control
systems. (Indeed, pure electrics tend to be simpler than these others.) Years
ago, those same cost factors made hybrid conventional cars unaffordable.
When the new Honda and Toyota hybrid cars first appeared on
the market, the automotive grapevine said that these companies were swallowing
or subsidizing as much as $100,000 per car, just to build up an early market
and invest in experience. But as a result of that experience, Honda is clearly
marketing hybrids on a much larger scale. Cost subsidies are highly proprietary
information – but the grapevine now says that subsidies may be near zero
now, and this seems to fit Honda’s marketing behavior here. Likewise, it
is said that other companies have begun to catch up to some degree. In summary,
the biggest cost problems which made all these cars unaffordable have been
overcome.
But
what about the batteries themselves? Costs and energy density of batteries
remain an issue. The best well-established batteries would still impose a
significant cost penalty and driving range penalty, relative to today’s
gasoline cars. However, the same is true of hydrogen fuel
cell cars using any plausible extension of the well-established hydrogen
storage technologies, such as compression or liquefaction, which also impose
energy losses! Roughly speaking – if electric cars save us the energy
losses in electrolysis, in local hydrogen transportation, and in converting
hydrogen back to electricity, and if the cost and storage loss problems are
comparable, electric cars would appear to win over traditional hydrogen cars.
Except for one other problem of electric cars, which I will discuss in a
moment.
New technology developments suggest that
this picture could be changed radically in the near-term future.
Given
the facts above, the main hope for hydrogen cars to compete with electric cars
lies in the hope of new forms of hydrogen storage on-board vehicles. If
hydrogen cars and infrastructure based on existing technology have almost no
chance of making it in the marketplace, then research efforts really need to
focus on exploring radical high-risk storage options which have a solid hope of
overcoming that barrier. (Though electrolysis and fuel cells clearly play an
important role as well.) Heavy investments in more conventional hydrogen
technology may be compared to huge government investment in improving lead-acid
batteries in the 1980s.
So
far as I know (as I try to track and evaluate fast-breaking, fast-changing
information), hydrogen storage based on carbon nanotubes is the only form of
hydrogen storage so far which has demonstrated good enough energy density in
solid prototype systems, and plausible performance. Professor Vijay Varadan of
Penn State University has demonstrated small high-density nanotube storage
systems, aimed at present for high value added markets like small medical and
computing devices. Conventional carbon nanotube material is far too expensive
for use in volume in cars, but Varadan has leveraged advanced research in
microwave and MEMS technology to develop a relatively low cost manufacturing
process and plant. It is far too early to feel confident that manufacturing costs
(and other issues) can ultimately be resolved here, for use in cars – but
clearly this should be a high priority direction for research. In my view, this
line of work represents more than half of the real hopes for a hydrogen
economy, and it would be a crime to fail to do full justice to it. Common sense
suggests that there might be comparable high-risk high-potential options out
there, but the few I have seen any sign of (such as an effort at the Jet
Propulsion Laboratory) seem to be much further out in the tree of risks and
required breakthroughs; they merit funding, but not at the same level as the
nanotube option.
On
the other hand, it now looks as if the breakthroughs on the horizon for
batteries are even more exciting. At a recent workshop on nanotechnology and
energy, at the Woodrow Wilson Center in Washington D.C., representatives of
Solicore displayed a new solid electrolyte for batteries, exploiting
nanostructured materials, which may possibly be all we need here. Solicore
compared the new battery capabilities against the best established lithium
battery designs. In general, it sounded like a doubling of lifetime (thus
halving cost flows) and of sustainable energy density. Because exact numbers
were not given, there are still significant uncertainties here (though not as
vast as the open questions about nanotube costs!). However, it was stated that
licensing agreements had been signed for batteries large enough for use in
cars, and that Hydro-Québec and others were actively following up on commercialization.
(See www.avestor.com). Some of these
claims have not been independently verified. Many researchers in
nanotechnology report even more promising results from more aggressive
technologies, using nanoelectrodes and the like, though there are lifetime
issues to worry about. It is conceivable that the ultimate car of the
future might even be a battery-heavy methanol fuel cell car, able to save money
by charging up for short trips but able to use methanol for longer ones.
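The "doubling of lifetime (thus halving cost flows)" point reduces to simple amortization. The price and cycle counts below are hypothetical placeholders, since exact numbers were not given:

```python
# Amortized battery cost per charge cycle: doubling lifetime halves the
# ongoing cost flow. Price and cycle counts are placeholder assumptions.

battery_price = 8000.0   # assumed pack price, dollars
baseline_cycles = 1000   # assumed lifetime of an established design

for cycles in (baseline_cycles, 2 * baseline_cycles):
    print(f"{cycles} cycles -> ${battery_price / cycles:.2f} per cycle")
```

Whatever the actual price turns out to be, the per-cycle cost scales inversely with lifetime, which is why a lifetime claim matters as much as an energy-density claim.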
In summary, the odds do
appear better than 50-50 that new
breakthroughs will increase the
advantage of electric cars over hydrogen. Again, however, we should remember
that the examples discussed here are really just the first wave of a very large
family of new technology, which may well succeed even if the first examples
should develop glitches. We certainly need to continue exploring the hydrogen
option, just in case the nanotube storage works out and the new battery does
not. It is far too early to really know. However, if Solicore should publish
and verify good enough numbers in the near future, it may then become time to
conclude that electricity would certainly have a large role in fueling cars in
any efficient sustainable energy economy.
What about biophotomethanol? Even
if biophotomethanol should work out, at an affordable cost and a large scale of
production, it would entail losses in its use in a fuel cell car, even a bit
more than hydrogen would. It is hard to believe that biophotomethanol could
really become cheaper per Btu than mirrors and solar thermal systems using the
same amount of light. Thus if the nanotubes really work out, hydrogen solidly
dominates biophotomethanol as a long-term sustainable car fuel. If they don’t,
but if the batteries work out, electricity has all the same advantages over
biophotomethanol that I just discussed for electricity over hydrogen. If
neither batteries nor nanotubes work out, we will desperately need that
biophotomethanol as the main fuel for cars. It is possible that more advanced
biological technology could someday be used mainly to capture carbon atoms, and
that solar energy (heat and electricity, in effect) could be used to maximize
methanol production per carbon fixation.
This leaves us with two further
questions for this section: (1) what was that other disadvantage of electric
cars, and what are its implications? (2) what if all three high-risk options
become problematic technically?
When electric cars were very
popular in the 1980s, extensive studies were done of many, many
technical issues in bringing them onto the road. Now that the range and cost
issues have been resolved, we need to revisit all these issues in great detail,
to see what they imply. Certainly the success of General Motors in building a
high-performance (if expensive) electric car should be a starting point of such
a reassessment. Given that I have not studied the relevant GM documents, I do
not know whether they have in fact solved all the hitches. If so, and if
Solicore proves its case, we would be foolish not to go whole-hog towards
electric cars, in my view. From a distance, I have the impression that most of
the ordinary design problems were solved – except perhaps for one. The
one which worries me (in my ignorance) is the issue of recharge time away from home. (As this goes to press, Prof. Ziyad
Salameh of the University of Massachusetts at Lowell tells me he has
demonstrated a system for full recharge
in 12 minutes – but that there is a serious chance of
doing much better with new intelligent control approaches.)
On
ordinary work days, electric car owners could recharge their cars overnight,
using low-cost off-peak electricity (very low price if grid control is made
more efficient). They would save a lot of money, especially in a future where
oil prices are allowed to rise very high. If the new batteries give them a
driving range of 300 miles or much more, this by itself should be enough to
provide a large market segment, in an efficient economy. But what happens when
people try to drive much more than 300 miles, on a long trip? How quickly can
their car be recharged away from home? Considerable research has been done on
options like fast-recharge electronics (some claim recharge in 5 minutes), or
gas stations which swap battery parts or whole batteries. Perhaps this is a
completely solved problem. Or perhaps not. Probably the special characteristics
of the Solicore battery would need to be studied, to see how it lasts when
subject to various types of fast recharge. Perhaps the neural network battery
modeling and optimal scheduling used on other batteries may be relevant here
– requiring new research to really nail down the new option. Or perhaps,
if worse comes to worst, we will need to think about battery-heavy hybrid fuel
cell cars, able to recharge overnight on normal days, but able to use methanol
or hydrogen bought at gas stations when one is taking long trips. (In such a
hybrid design, conventional hydrogen storage would “bust the
budget” for such an auxiliary system.) Such battery-heavy hybrids may
also solve some glitches in traditional fuel-cell cars, such as problems in
start-up time.
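The overnight-charging savings described above can be made concrete with a rough comparison. All four numbers below (off-peak electricity rate, per-mile consumption, gasoline price, fuel economy) are illustrative assumptions for a back-of-envelope sketch, not figures from this paper:

```python
# Rough cost-per-mile comparison: overnight off-peak charging vs. gasoline.
# Every number here is an illustrative assumption.
OFF_PEAK_PRICE = 0.05   # $/kWh, assumed off-peak residential rate
EV_CONSUMPTION = 0.3    # kWh/mile, assumed for a mid-size electric car
GASOLINE_PRICE = 2.00   # $/gallon, assumed
FUEL_ECONOMY = 25.0     # miles/gallon, assumed

ev_cost_per_mile = OFF_PEAK_PRICE * EV_CONSUMPTION    # $0.015 per mile
gas_cost_per_mile = GASOLINE_PRICE / FUEL_ECONOMY     # $0.08 per mile

print(f"electric: {ev_cost_per_mile * 100:.1f} cents/mile")
print(f"gasoline: {gas_cost_per_mile * 100:.1f} cents/mile")
```

Under these assumptions, fuel cost per mile falls by roughly a factor of five; the qualitative conclusion (large savings from off-peak charging) survives a wide range of plausible inputs.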
Finally
– what if all three high-risk technologies fail – breakthrough
batteries, nanotubes, and biophotomethanol?
If
we had no other choices, and we had to zero out net CO2, probably we
would simply have to swallow the existing best electric car technology, warts
and all. After all, carbon-based fuels would be out, and we would be back to
today’s battery versus cryochamber tradeoff. But on paper, at least,
ammonia provides a strange but highly reliable alternative, allowing the kind
of driving range and refuelability consumers demand today. Grimes and Kordesch
(see Kordesch’s book on fuel cells) and collaborators have built
functioning ammonia cars and other vehicles. Ammonia is feared for safety
reasons – but all high-energy-density materials present some degree of
safety issues. The world has shown that it is willing to accept the risks of
gasoline, to achieve high performing cars, and there is every reason to believe
it might be similar for ammonia, if the alternatives do not work out. Ammonia
can be reformed to hydrogen on board a car even more easily than methanol, and
it contains no carbon to emit. We certainly know how to make it on a large
scale from air and energy, as demonstrated in innumerable ammonia plants around
the world. It is a liquid fuel. Nevertheless, there would be major costs in
transition to ammonia, and major delays; we may hope that the new breakthroughs
in nanotechnology will eliminate the need for such a backup. At the same time,
perhaps we should work on the long-lag elements of a transition to ammonia,
just in case it becomes necessary.
(During
Millennium Project discussions in 2003, I reconsidered this. If cars and trucks
account for about 1/3 of CO2 emissions today, and if use of new advanced types
of methanol fuel cell cars would reduce CO2 per mile by a factor of 3, the CO2
from cars would fall to about one-ninth of today’s total emissions, and we
would not have to worry about problems in using methanol for a very, very long
time.)
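The arithmetic behind that estimate is worth making explicit. A minimal back-of-envelope check, using the round numbers above (one-third of emissions from cars and trucks, a factor-of-three cut per mile):

```python
# Back-of-envelope check of the CO2 arithmetic: if vehicles emit 1/3 of
# total CO2, and new cars cut CO2 per mile by a factor of 3, then vehicle
# emissions fall to 1/9 of today's total (a 2/9 cut in overall emissions).
car_share = 1.0 / 3.0     # fraction of total CO2 from cars and trucks
per_mile_factor = 3.0     # per-mile reduction factor for the new cars

new_car_share = car_share / per_mile_factor   # vehicle CO2 as fraction of today's total
total_cut = car_share - new_car_share         # overall reduction vs. today

print(f"vehicle CO2 falls to {new_car_share:.3f} of today's total")
print(f"overall emissions drop by {total_cut:.3f} of today's total")
```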
In
a similar vein, STM Power has argued that thermal batteries could be used for
storage on-board cars, combined with their new external Stirling-derived
engines. The latest STM designs for mass-produced inexpensive engines would
only have about 40 percent engine efficiency – a bit better than the
real-world whole systems
performance of PEM fuel cells, but far inferior to electric motors. They claim
that thermal batteries built so far have shown about 20 percent energy losses in
storage, but an energy density far better than batteries and adequate to the
wants of today’s car drivers. They also claim that STM efficiency could
be raised to as high as 60 percent (like a good fuel cell), with further
research into improved materials, and that thermal batteries could also be
improved. On balance, it may be about as good as ammonia. It is too early to
predict how good it could become. More research is clearly called for,
especially since STM systems have important potential in other sectors of a
sustainable energy economy.
In
summary, it is too early to pick a “winner” between the five
options here for a zero-net-CO2 car fuel. All five show serious
promise, but merit far more well-focused research than they are receiving
today, despite the huge investments in energy technologies which are less
relevant to the goal of long-term sustainability. All five would have the
secondary benefit of freeing us from dependence on fossil fuels (and OPEC) for car
fuels; that in turn would have truly enormous benefits in helping us avoid
global wars. The most critical need is for expanded efforts (in money, support,
attention and other resources) for critical emerging technologies now being
funded at a very tiny scale, such as nanomaterial batteries, manufacturing of
nanotube hydrogen storage, dynamic stochastic OPF grid control, intelligent
battery management and recharge, biophotomethanol, and improvements in thermal
batteries. Large-scale industry-government cooperation in
developing contingency plans and incentives for deployment will also be
critical.
The
real-world issues connecting today’s energy economy to the long-term
future of car fuel are far more complex than the end point itself. They are so
complex that many people would naturally recoil into a shell of robotic,
procedural behavior or ideological thinking, because it is not so easy to think
about these complex life-and-death problems in a complex, realistic way. That
makes it all the more important that those of us who are firmly committed to
staying alive focus on the larger context, and think harder about what can be
done.
To
begin with, of course, comes the question: “Is this really a life and
death issue? And if so, on what time scale?”
The
importance of CO2 itself remains a matter for debate. Many believe
that continued rises in CO2 would indeed inundate some areas within a
hundred miles of the ocean, and dry up (or flood) some farmlands, causing on
the order of a trillion or a few trillion dollars worth of damage over a
century – but on the scale of a century, humanity could survive that. A
carbon tax large enough to cut CO2 emission in half under present
technology might cause more economic damage than that. (Circa 1980, I urged the
EIA to include a carbon tax scenario in the Annual Energy Outlook; the results
were not so encouraging for the idea, but they were very well grounded in deep
EIA-wide analysis of the real-world drivers of energy demand and economic
growth. If anything, they were on the optimistic side, relative to later
analysis.) Nevertheless, industry needs to be realistic about the seriousness
of the public and political problems that could result if an efficient way to
lower CO2 is not pursued. We can develop alternatives to present
technology and trends. We do not have to limit ourselves to the carbon tax
idea.
But
it’s not just CO2 at stake here. It’s also the
availability and price of car fuel itself.
There
was an important international conference on energy and sustainability, jointly
assisted by the energy industry (particularly PEMEX) and by the Millennium
Project of the United Nations University, led by Prof. Oscar Soria-Nicastro at
the University of Carmen (UNACAR) in Mexico in June, 2003. (See
prospectives21carmen.org.mx for details and presentations.) The presentations
included an extremely impressive presentation by Ismail Al-Shatti of the Persian-Arabian
Gulf Institute for Future and Strategic Studies, which has access to the very
best information across all sources and viewpoints in that area.
Al-Shatti weaves together a number of complex strands of
politics and economics which are sometimes hard to integrate when viewed
unsympathetically from a distance in the West. The message is not one that many
of us would enjoy hearing – but we should be grateful to the messenger
for helping us come to terms with reality. Part of the reality is that the whole
world risks paying a severe and unbearable price if we do not cut heavily into
our dependency on oil in 20 years time. We need to fully understand and
assimilate this message, including those challenging parts which are beyond the
scope of this paper.
Above all, we
urgently need to understand more clearly how severe the timing problems are
here. Since Al-Shatti already accounts for phenomena like hybrid cars and
undiscovered oil, we would need much more to change the situation in 2025 by
much. (Similar or equivalent bleak forecasts come from other sources like
Cavallo of the US Department of Homeland Security, formerly of DOE, using USGS
estimates of undiscovered recoverable oil, and from IEA, Shell, etc.) However,
because cars stay on the road for an average of 15 years, we would need
(roughly) all new cars sold in 2010 to be gasoline-independent in order for
half of the cars on the road to be that way by 2025! It is not realistic to
imagine all the new cars being electric, hydrogen or dedicated natural gas
vehicles by 2010! Our situation is grave indeed, and it leaves us with few safe
choices.
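The turnover arithmetic can be sketched with a minimal model. Assuming a constant-size fleet in which vehicles retire at a constant rate with a 15-year mean lifetime (an exponential survival curve is my simplifying assumption here, not a figure from Al-Shatti):

```python
import math

# Minimal fleet-turnover model: constant fleet size, exponential retirement
# with a mean vehicle lifetime of 15 years. If every car sold from year 0
# onward is gasoline-independent, the gasoline-independent fraction of the
# fleet after t years is 1 - exp(-t / 15).
MEAN_LIFETIME = 15.0  # years, from the text

def converted_fraction(years_since_switch: float) -> float:
    """Fraction of the fleet replaced t years after all new sales switch."""
    return 1.0 - math.exp(-years_since_switch / MEAN_LIFETIME)

# 2010 -> 2025 is 15 years: roughly 60 percent of the fleet turns over,
# broadly consistent with the rough claim of "half of the cars on the road".
print(f"{converted_fraction(2025 - 2010):.2f}")  # prints 0.63
```

The exact survival curve matters less than the conclusion: sales must switch around 2010 for the on-road fleet to be substantially converted by 2025.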
Many
people have assumed that the “chicken-and-egg” problem of providing
a new fuel and providing cars that use that fuel is the biggest difficulty in
moving towards a world of sustainable car-fuel. But the conflicts of political and
economic and military interests (and the resulting biases and misperceptions
and conflicts of opinion) may actually be a bigger challenge.
These
conflicts are far too complex to treat exhaustively here, but there is one key
variable we can start out with – the world oil price.
Some
economists believe that the world oil price today is highly irrational and
“subsidized.” According to free market economic theory, it is
highly irrational to price a limited scarce resource at the marginal cost of
production; it is important to add in the scarcity rent. When future conditions
are unknown, the scarcity value should be augmented to reflect the
“insurance value” of having the resource available in the ground.
Political distortions, including distortions of interest rates (rates
which are often set at high levels in order to allow governments to
borrow more money), have a heavy effect on the price.
This
view is highly debatable in the West, though I for one tend to agree. Within
the OPEC nations, however, people would extrapolate all this much, much
further than I would. One of the deepest analyses of Al Qaeda (Through Our
Enemies’ Eyes, by Anonymous) reports how deep and pervasive are the
feelings that the West is stealing Islamic oil at unfair garage-sale prices,
enforced by military means and corrupt regimes. Yes, there is some extreme
misperception there – but we in the industrialized world need to come to
terms with our own biases and wishful thinking; we need to plan for a future
which fully accounts for the realities of the Middle East. For ten or even
twenty years, petroleum diversification will limit the pressure on us from OPEC
– but we may need all twenty of those years, in order to minimize the
probability of an unsustainable collision. Bear in mind that this discussion
refers to the possibility of problems with the Middle East becoming far more
difficult than they are today.
No matter how hard we work on global dialogue and peacekeeping,
the chances of preventing greater global conflict will be very small if we all
turn into hungry rats fighting for the last piece of cheese; I would not want
to be one of those rats – and I wouldn’t want to be the cheese
either!
A
traditional economist might well respond by saying: “OK, let’s just
cool down, and let the market solve all these problems. Let’s just get
rid of the distortions, and let the price of crude oil triple or so. That will create
all the incentives you would need to bring the new car fuel technologies
online. As it rolls in wealth, the MidEast will be happy too.”
An
alternative approach is to admit that oil users and oil producers both face
huge uncertainties and grave worries here – and that both sides could
gain from a “global compact” that shares risk on both sides, by
something like a treaty that: (1) no one will sell crude oil below $30/barrel
(in 2004 dollars) ever again; (2) we will all work together as hard as we can
to prevent the kind of scenario that Al-Shatti has depicted.
Many economists – including
economists in the energy industry – were very optimistic about the more
traditional approach, back around 1979-1980, when oil prices were indeed
allowed to rise substantially. But it did not work out so well. In the real
world, high oil prices led to major
(and perhaps unavoidable) political problems in the industrialized world. They
led to massive economic impacts, not just direct but indirect. (Unfortunately,
many econometric models were not well-designed to capture all the effects. The
Wharton Annual Model did capture the gross short-term indirect effects, and the
Dale Jorgenson model captured some of the investment effects, but there was no
major model which captured both. There were also some industrial sectoral impacts
observed by myself (Energy paper) and Marlay (Science).) They led to clumsy
efforts at political fixes of all kinds which may have made transition to
sustainability harder. Clever oil ministers in Saudi Arabia and elsewhere soon
realized that it would do no good to just kill the customer, whatever economic
theory might say; deep world depression and lack of growth would also cut back
on oil demand and cause other problems.
In
my own talk at the Carmen conference, I proposed the following approach to the
OPEC world: If we can’t have a true free-market price, let us start
thinking (conceptually, not legally!) about a kind of “two tier”
pricing approach. Let us try to work toward a proper, higher price for world
oil – while working in parallel to allow the customer to survive and bear
that higher price, by having more and more availability of an alternative way
to power cars, especially for ordinary people who desperately need to get to
work every day. A kind of a balance. (Technically, the idea is to exploit and
foster segmentation of the markets now served by oil. Some segments can afford
to pay more than others. Those who would pay more should bear in mind that they
do benefit from having oil available to them further into the future.) In such
a regime, many other producers might aim to sustain the same level of
short-term revenue by equalizing price rises and production cutbacks, thereby
being able to continue oil production further and further into the future
without a loss in revenue. Even the most intense efforts to increase
alternative fuels would be barely enough to maintain this kind of dynamic
balance, as oil prices try to rise in the years ahead. It would be nice if we
could all work together on this. It would make particular sense to accelerate
the switch to alternative fuels in developing economies where the ability to
pay for oil is limited, and the damage to pre-existing interests would be less.
I would regard this as a kind of “Pareto optimal solution,” where
there is a bigger economic “pie” and therefore everyone can have a
bigger slice, if we can begin to share this kind of vision.
But:
how do we translate all this into technical market reality? What about those
chicken and egg problems, and so on?
The
technical transition will depend on many parallel actions by many different
actors. Since there is no special order dictated by logic, I will begin by
discussing the role of the coal and gas industries, major players which I have
not even mentioned by name so far.
No
one proposes to carry coal on-board a car, for many reasons. But coal will play
a critical role for years to come in the transition to alternative car fuels.
For example, holding down or lowering the price of electricity will be crucial
to the market prospects of electric cars, or
of hydrogen cars based on hydrogen from electrolysis. If we could make
greater use of coal to provide electricity to places like California, we could
reduce the true cost of electricity very substantially, perhaps as soon as one
year in the future if there were not substantial nonmarket obstacles to serving
the needs of consumers in those areas. (The price now being paid by California
consumers is much less than the true cost, because of massive subsidies paid by
the state of California – subsidies which are playing a role in the
ongoing bankrupting of the state budget.) NSF and the Electric Power Research
Institute (EPRI) held a workshop in Palo Alto in October 2001 to evaluate these
problems and possibilities, and suggest concrete possible solutions. Preliminary
information is available on the web site of Prof. Chen-Ching Liu of the
University of Washington. An analysis of the major conclusions, with some
additional information from a follow-on workshop co-sponsored by Entergy and
from the Brazilian power industry, is expected in late summer or early fall
2003, under the leadership of my new NSF colleague Dr. James Momoh. Research on
clean coal and on removing CO2 from smokestacks sponsored by DOE
complements these efforts in an important way.
Coal
can also be used to produce hydrogen directly, as can nuclear power, but this
does not sound so exciting, after the discussion of the previous section. Many
of the proposed technologies are very close to the famous synfuels projects of
the defunct Carter initiative on synfuels. The termination of that program may
have been influenced in part by an analysis of cost growth in synfuels
technologies by Ed Merrow of the RAND Corporation, which I initiated and funded
at EIA.
And
finally, coal can be used in methanol production. Of all the synfuels
technologies we evaluated in the 1980’s (not counting tar sands as
synfuels), the most exciting by far was the Texaco Cool Water technology. Most
synfuels technologies were a disaster in terms of NOx emissions, but Texaco avoided
that by using high-pressure oxygen in place of air. Even more impressive, they
actually had plants up and running close to market-acceptable parameters. As
best I recall, the most interesting version (economically) involved
CO-PRODUCTION of electricity and methanol. Not only could they produce
electricity at market prices; they could then legitimately price their methanol
at a level even lower than the low world market price at the time. For nations
like China, blessed with huge stretches of abundant coal and serious problems
in fueling a growing fleet of cars, this kind of technology merits more
attention.
(Note in press: at a recent DOE-sponsored workshop discussed
at www.agci.org,
it was agreed that CO2 sequestration from this specific
type of coal-fired plant looks far more promising than any other route to
CO2 sequestration. This may be our best hope for truly massive near-term
reductions in CO2, above and beyond the changes in transportation proposed
here.)
The
production of methanol and electricity as coproducts from Texaco or Tennessee
Eastman types of gasifiers has two big advantages over their use to make
electricity: (1) because of the thermodynamics (matching heat and free energy), coproducts can achieve higher
theoretical efficiency; and (2) by increasing the fraction of methanol at
night, and increasing the fraction of electricity in the day, one can use these
plants to track loads on the electric power grid – a crucial issue in the
economics of electric power.
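The day/night load-tracking idea in point (2) can be sketched as a trivial dispatch rule. All numbers here (the syngas throughput, the demand fractions) are invented purely for illustration:

```python
# Sketch of load tracking via electricity/methanol coproduction: a gasifier
# with a fixed syngas throughput shifts output toward electricity when grid
# demand is high, and toward methanol when demand is low. Illustrative only.
SYNGAS_CAPACITY = 100.0  # arbitrary energy units of syngas per hour

def split_output(grid_demand_fraction: float) -> tuple[float, float]:
    """Allocate the fixed syngas input between electricity and methanol.

    grid_demand_fraction: 0.0 (deep off-peak) .. 1.0 (system peak).
    Returns (electricity_units, methanol_units) per hour.
    """
    electricity = SYNGAS_CAPACITY * grid_demand_fraction
    methanol = SYNGAS_CAPACITY - electricity
    return electricity, methanol

print(split_output(0.75))  # daytime peak: mostly electricity -> (75.0, 25.0)
print(split_output(0.25))  # night: mostly methanol -> (25.0, 75.0)
```

A real plant would face ramping limits and conversion efficiencies that this one-line rule ignores, but the point stands: the plant can sell into the grid when electricity is valuable and bank fuel when it is not.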
This leads naturally into the
issues of how to get that methanol actually used in cars – but first I
promised to say something about natural gas.
Advocates
for natural gas have often worked hard to push the idea of using natural gas as
a fuel in cars. In the past, they have mainly advocated putting gas canisters
into a car, and piping the gas into an ordinary internal combustion engine.
They have promoted this as a here-and-now opportunity to reduce dependence on
oil. They have driven around real cars based on this approach, and have even
arranged sales of conversion kits for conventional cars. This never got so far
in the near-term marketplace. In the long-term, it is clearly not so efficient
as using the natural gas to make methanol, and using the methanol in a
well-designed fuel cell vehicle.
But
what about using natural gas in a fuel cell vehicle? Many researchers, such as
a group at Arthur D. Little, have developed fuel processors to convert natural
gas to hydrogen on-board a car, to permit the use of a PEM fuel cell. But these
fuel processors were extremely bulky, and they inherently use more energy than
the simple, small steam reformers which can process methanol or ammonia.
Likewise, gaseous storage presents issues with natural gas similar to the
issues with hydrogen.
Nevertheless,
though natural gas does not belong on the “A” list of options
today, one could justify a certain stream of aggressive high-risk
high-potential research to try to get it there. It is possible that intelligent
control could make it possible to shrink the size of natural gas fuel
processors. More important – there are high-risk options for fuel
conversion, using approaches radically different from partial oxidation and the
like, which might have hope of improving efficiency. And perhaps the same
carbon nanotubes which Varadan is using for hydrogen might be applicable to
natural gas as well. Or not.
If all of this should work out
someday, it would simply allow us to use primary sources of natural gas
efficiently and directly in a car – but it would not address the issue of
CO2 production, and it would lead us into debates about how long the
natural gas would last. Certainly natural gas is a premium fuel, like oil, and
its price has begun rising again in reflection of its value. There are many
debates about the possibility of
huge sources (larger than the world’s coal supply) of natural gas deep
below the oceans, but there is no debate about the fact that natural gas from
such sources would be very expensive. Again, it is a high-risk option worth
exploring, but the risks and costs are high enough that we need to aggressively
pursue other approaches as well, including options that would zero out the net
CO2.
The
discussion of coal and gas leads naturally to a key question: what can we do now to open the doors as soon as
possible to a greater use of electric, hydrogen and methanol fuel cell cars?
If the new breakthroughs in batteries
work out, the transition might be easier and faster for electrics than for the others.
General Motors already developed an interesting marketing plan for its electric car a few years ago, and it
did not depend heavily on new government actions. There was clearly a big
market out there already for such cars, based on cars recharged from home
electricity. If the new batteries allowed GM and others to sell such cars with
a cost subsidy going to zero, there is hope for a true free market transition.
More precisely, there is hope of a market growing fast enough that the next
small steps would be relatively easy. Once the early adopter market is
saturated enough that supply of electric cars starts to exceed demand, it would
not cost the government too much money to ask for rapid recharge stations at
key points along interstate highways; that by itself would take care of the
problem of people trying to travel long distances, if rapid recharge technology is ready to go at that time. In the
meantime, the main role for the government (other than supporting related
research) would be to work harder to improve efficiency in electric power
supply in general. The key short-term issues there have already been discussed.
Fuel
cells – hydrogen or methanol – present a trickier transition
problem. No one will buy hydrogen or methanol fuel cell cars in quantity until
the fuel is widely available. But the fuel will not become widely available
until there is a market. There are some heavy-handed government-driven
possibilities available, such as mandating hydrogen use by large fleets and
ordering them to make their gas tanks available to the public as well; however,
such approaches are both slow and desperate, because of how completely they
short-circuit the role of the market. More market-friendly approaches would be:
(1) efforts to develop fuel cells which could even survive the use of
hydrocarbon fuels, like solid oxide cells or alkaline fuel cells with new
wrinkles (related to the earlier work of Grimes and Kordesch and others); (2)
efforts to expand methanol availability without waiting for fuel
cells. In my view, both approaches should be pursued in parallel – though
the second seems more certain to be workable.
The
use of hydrogen from electrolysis in an internal combustion engine does not
seem realistic at all as a technology able to compete in the marketplace in the
coming decade. But the availability of methanol really could be expanded
dramatically, which would set the stage for a real-world market for fuel cell
vehicles. That in turn is the fastest way to get fuel cell vehicles on the road
in massive quantities, with a real market. Even if hydrogen should turn out to
be better than biophotomethanol in the long term, we could GET to that
long-term a lot sooner if we minimized the delay between now and the time when
millions of fuel cell cars are on the road – even if those cars use
methanol in the tank.
There
is a major role for reasonable, market-friendly industry-government
partnerships in making methanol more widely available in gas stations, for use
in internal combustion cars. For example, huge amounts of remote natural gas
are being totally wasted – vented and flared – around the world,
when the gas is an unintended byproduct of oil production. There is new
technology for collecting that kind of gas at lower cost than before, and
well-established technology for converting it to methanol. Groups like the
World Bank could even set a priority on investments to capture this wasted gas
as methanol, and get it into a distribution system.
Investments
in such methanol production have been limited by the very low price of methanol
on world markets. Such low prices do not provide a strong incentive in the
short-term to additional production – but since methanol competes well
with gasoline in price per Btu, one may ask why methanol has not already penetrated
the car market more. This is especially true, since methanol is being used in
very high-performance cars, like the Indy 500 cars, and many car drivers would
actually pay a bit extra for high performance. The problem comes down to
chicken and egg.
Back
in the 1980s, experts like Roberta Nichols of Ford (now at UC Irvine) and Grey
of EPA Ann Arbor, looked closely at the issues in using methanol as fuel in
conventional, internal combustion cars.
It is very unfortunate that their early work (then supported by people
like Boyden Gray, Counsel to the first President Bush) has been almost
forgotten by key policy-makers in the wake of enthusiasm for fuel cells. In
order to get to the fuel cells, we
would first need to pave the road with these less exalted bricks.
In
the 1980’s, serious analysts generally got past some of the
lobbyists’ red herring issues about methanol safety and so on. Nichols
estimated at the time that it would take $300 extra per car to make cars which
are fully dual-fired, methanol-gasoline. A good part of that cost was the cost
of gas tanks made from stronger materials, like stainless steel, able to resist
methanol corrosion of the older materials. But in the meantime, new materials
have been developed. From the auto grapevine, I hear that we have newer
materials available today, which would just about zero out the premium for fuel
flexibility in the gas tank and hoses and such. On the engine side, we now have
new intelligent control designs and chips for vehicles (extremely useful for
NOx reduction in my opinion, even without a fuel-choice driver) which would
reduce the premium on that side. (NSF funded an SBIR project under Raoul Tawel
of Mosaix LLC and the Jet Propulsion Laboratory which could be very useful
here, for example; some additional details are in my chapter in J. Si et al,
eds, Handbook of Learning and Approximate Dynamic Programming,
forthcoming.) A more aggressive
pursuit of these technologies, with some degree of government partnership,
should allow widespread production of truly multifuel cars, with benefits to
clean air, and relatively little cost premium. We could also build upon the
learning experience from California in 1997, when the California Energy
Commission and Ford partnered in an experiment which sold thousands of
methanol-capable Taurus cars at no cost markup.
For
practical political reasons, it would be essential for such vehicles to be
adaptively able to handle methanol, ethanol, and various grades of gasoline. It
would require government intervention to really push hard for such flexibility
in all cars – but the result would be more of an open playing field in
fuels in general. It would be essential, however, to maintain a higher level of
technical integrity here than is customary in large lawyer-advocacy-style
government programs.
Given
the low wholesale price of methanol, and a growing number of such flexible
cars, it should also require
relatively little force to make methanol available in a growing number of gas
stations – especially in “extra pumps” where leaded gasoline is
still being phased out in many parts of the world. Quality fuel tanks would be needed
in gas stations, as in the cars themselves… but this is needed in order
to prevent gasoline seepage into water supplies in any case!
Methanol
does have a lower energy density than gasoline. I would envision a scenario in
which the owner of a flexible car
would get to decide, every time he or she
fills up, which fuel to choose today.
“Should I spend 75 cents per gallon of methanol and be
able to drive 200 miles before my next refill? Or do I spend $3/gallon on
gasoline or ethanol, but be able to drive 400 miles?” Perhaps he/she
would choose methanol on most days, and save the gasoline for a day before a
big trip. And perhaps poor people would use more methanol, while rich people
would stick with gasoline or ethanol.
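That fill-up choice reduces to a cost-per-mile comparison. Using the prices and ranges from the quote, plus an assumed 15-gallon tank (the tank size is a hypothetical figure; the halved range reflects methanol’s lower energy density):

```python
# Cost-per-mile comparison for the fill-up choice described above.
# Prices and driving ranges come from the text; the 15-gallon tank is an
# assumed, illustrative figure.
TANK_GALLONS = 15.0

def cost_per_mile(price_per_gallon: float, range_miles: float) -> float:
    """Fuel cost per mile for a full tank at the given price and range."""
    return price_per_gallon * TANK_GALLONS / range_miles

methanol = cost_per_mile(0.75, 200.0)   # $0.05625 per mile
gasoline = cost_per_mile(3.00, 400.0)   # $0.1125 per mile

print(f"methanol/gasoline cost ratio: {methanol / gasoline:.2f}")  # 0.50
```

At those prices, methanol costs half as much per mile despite the shorter range, which is exactly why the budget-conscious driver in this scenario might choose it on ordinary days.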
Many of the
technologies discussed above could go ahead based on the usual mix of private
sector investment and government support for high-risk research. However,
because the situation is very urgent here, and because the world economy is not
moving fast enough yet to deal with it, I personally would advocate two urgent
matters of legislation: (1) that all cars placed into service after year X
which are capable of using gasoline as a fuel must also be capable of carrying
and using ethanol or methanol safely in the same gas tank; and (2) all
incentives, subsidies and research opportunities which exist for biohydrogen or
bioethanol should be extended to biomethanol. Year X would
be two years from date of passage in most nations – but the US has
elaborate certification rules which should either be relaxed for two years or
used to justify a delay of four years.
Notice that such a law would still
give consumers and producers the full freedom to choose, say, dedicated natural
gas cars instead – but it is clear that retooling the materials used in
gas tanks can be done in two years, while a heavy conversion to natural gas
could probably not be so quick. The net effect is to slightly encourage natural
gas, electric or hydrogen cars – but to ensure that all cars with
gasoline tanks have stronger tanks, and that all cars have a way to operate
without gasoline. The law is asymmetric, because tanks which usefully hold
gasoline cannot also hold natural gas in useful amounts.
In
summary, natural gas (especially remote gas) can play a key role here as a
premium fuel, as a source of methanol, to displace oil, even in the next ten
years. The potential is there for a very large displacement, even within ten years,
if we really pay attention and take this seriously. The easier, lower-value
task of generating electricity can then be shifted to other fuels, as will be
discussed in the final section of this paper.
As
for hydrogen – the methanol effort and hydrogen technology development
would provide the best opportunities for near-term work. Transitions to ammonia
or thermal batteries would be more difficult, and I have no immediate
suggestions other than continued work on the underlying technologies. It is premature
to consider government requirements for ammonia use (except perhaps in some
special military applications, which the Army has researched) at a time when a
more market-friendly approach based on electricity, methanol or hydrogen seems
quite promising.
Even after we figure out how to power our cars and trucks,
global energy sustainability requires that we ask one more question:
How can we find enough total
primary energy to make the fuels we use in our cars, and to power the rest of
the world economy, in a sustainable way, without violating the environmental
and national security constraints?
Once again, there are many desirable technologies that can
provide 2 percent here or 5 percent there, but even taken together cannot put
us all on safe ground, in being able to meet total world demand when oil and
gas drop off. Richard Smalley, a Nobel laureate for the discovery of fullerenes and a pioneer of carbon nanotube research, has
conveyed this picture very elegantly and persuasively this year – but his
conclusions are essentially the same as those of other serious analysts.
(See Hoffert et al., Science, Nov. 1, 2002; the citations given
at the beginning of this paper; and the Shell future scenarios discussed
at prospectivas21carmen.org.mx.)
There are only four technologies which definitely could
provide enough total energy to meet the world’s needs for decades and
decades, as conventional oil and gas start to run out:
(1)
coal
(2)
earth-based solar power
(3)
space solar power (SSP)
(4)
nuclear
In effect, this is like the “A” list of five car
fuels discussed earlier. As with car fuels, esoteric natural gas also
provides a reasonably plausible high-risk alternative, and biophotomethanol has
some potential – but it seems unlikely that biophotomethanol could ever
compete on cost with earth-based solar in producing electricity, which will
almost certainly be the most important carrier for energy uses outside of
transportation. There are other
ideas for more radical technology which might or might not belong on the list in
the future. But to maximize our chances of achieving overall sustainability in
primary energy, we need to focus on this list.
Coal
will probably dominate the primary energy supply for many years, as oil and gas
ramp down. But sooner or later, coal supply will become more of an issue
worldwide, and prices will eventually rise. Furthermore, there are large parts
of the world where coal is not so plentiful or accessible, and they will need
to make use of one of the other energy sources. The national security
constraints suggest that it would be very dangerous to have all these regions
rely entirely on nuclear. Therefore, sustainability on a global basis will
depend heavily on our ability to make earth-based solar or SSP
affordable. If we can find a way to make earth-based solar or SSP cost as
little as nuclear electricity in areas like the Middle East, the benefits to
human security will be enormous. If we can make them cheap enough, there is
even a hope of a major economic boon to the US as well.
In 2001, the Millennium Project of
the United Nations system (http://millennium-project.org) asked policy makers
and science policy makers all over the world: "What challenges can science
pursue whose resolution would significantly improve the human condition?"
The leading response was: "Commercial availability of a cheap, efficient,
environmentally benign non-nuclear fission and non-fossil fuel means of
generating base-load electricity, competitive in price with today's fossil
fuels."
All of this leads up to the
following question: how can we maximize the probability that earth solar and/or
SSP will become cost-competitive with coal and nuclear? What are our chances of
success in either or both alternatives?
Maximizing Earth Solar and Allied Technologies.
For large-scale earth-based solar
power, there are two well-known alternatives – solar farms based on solar
thermal technology, and solar farms based on photovoltaic chips (PV). As
discussed earlier, the best known way to generate solar thermal power is the
Sandia design, using mirrors to generate intense heat, STM engines to convert
heat to torque, and turbogenerators to convert torque to electricity. STM claims that the resulting
electricity cost is much less than that of PV farms – but of course we
need to continue to work on both streams of technology. A new
successor company to STM, Lennart Johannson global associates, has worked with
members of the Millennium Project to follow up these possibilities on a private
sector basis. As noted at http://smalley.rice.edu,
a small fraction of the earth’s deserts would be enough, in principle, to
meet all the world’s energy needs, using ground-based solar power; the
advanced SAIC+STM design can provide the lowest available costs for that
purpose, particularly in developing nations (where the labor cost of assembling
mirrors is far less than in the US).
People who try to market earth
solar systems, distributed generation and wind power generally agree that these
technologies already make sense in many market segments, much larger than
existing sales. They generally say that the major barriers to greater use of
the technology are two-fold: (1) difficulty in getting hookups to the electric
power grid and good payment for electricity they sell to the grid; (2)
nation-wide zoning rules which require all distributed generators to obtain zoning as major power
producers, before they are allowed to sell any electricity at all to the grid.
Professor Lester Lave of Carnegie-Mellon has done some interesting research on
the latter problem.
The first problem is actually a
reasonable market response to the needs of electric power grids managed by old
control methods, which are not adaptive enough to fully capture the benefits of
distributed generation entering the grid. Dynamic Stochastic OPF (discussed
previously) is theoretically the optimal way to overcome these problems. The
challenges in DSOPF are not in discovering new fundamental technology, but in
building up software to implement new technology already discovered and teams
of engineers capable of following through. New transmission hardware will also
play a crucial role (as in the 2001 NSF-EPRI workshop discussed earlier), but
electric power investments based on older control paradigms might actually
delay the paradigm shift which is needed here. Dr. James Momoh of the ECS
Division of NSF has for two years funded (joint with the Navy and the Social,
Behavioral and Economics division of NSF) a special initiative called
“EPNES,” which tries to foster the new types of cross-disciplinary
partnerships needed to solve these kinds of problems – but for two years
money has had to be sent back, because of a lack of truly vigorous,
genuinely cross-disciplinary partnerships. The cultural problems are larger barriers than
the objective technological difficulties. But we need to keep on working on
these problems, ever more vigorously, building especially heavily on those
groups which really are trying to move seriously in new directions.
With regard to zoning – it
is my opinion that someone at a national level really should be able to change
or bypass the relevant national zoning rules. Instead of requiring that all distributed generators (DG) be
zoned as full-fledged large-scale power plants, there should be a blanket
exception for all DG which only use alcohol or renewable energy sources when
generating electricity for sale to the grid. This would allow companies like
Intel, in California, to afford to pay
for DG as a backup to the grid, by amortizing the cost against generation
they sell to the grid in normal times.
Solar thermal technology could
also be enhanced and accelerated by the development of new markets and
partnerships which further the STM component. For example, sources of biomass
like wood and sugar cane will never be large enough to meet more than a small
percentage (less than 10 percent) of world needs. But it is a shame to waste
half or more of their energy content, by converting them to ethanol or methanol
fuel, as is very popular in large parts of the world today. By burning them to
produce heat in high-efficiency furnaces, and converting that heat to
electricity via STM and turbogenerators, one could arrive at an important new
near-term source of electricity, particularly in countries heavily blessed with
timber or sugar cane. Wood furnace technology has been developed very far in
Sweden, and sugar cane technology in Brazil. This could also provide an
alternative in those countries to the burning of natural gas to produce
electricity, allowing that fuel to be better used in producing methanol for
cars. All of this would only be a steppingstone to the use of solar thermal
technology, but it could be very useful in facilitating a transition to solar
thermal.
Maximizing Space Solar Power and Allied Technologies.
The average solar flux per square meter in geosynchronous
orbit is at least an order of magnitude larger than the flux in the most
promising deserts on the surface of the earth. Therefore, it would be
irrational not to fully explore what the possibilities might be to exploit this
enormous resource.
Circa
1978-79, NASA and DOE were funded to develop a reference design for an SSP
system, and to evaluate it in depth. NASA projected a cost of electricity of
5.5 cents per kilowatt-hour at that time, but the DOE report was far more
skeptical. Dr. Fred Koomanoff, a Division Director at DOE Basic Energy Sciences
led the DOE evaluation. He also enlisted the help of several technical experts,
including myself from EIA.
Several
active lobby groups pushed very hard (and successfully) to discontinue SSP
funding at that time. They cited the DOE evaluation as an argument for
discontinuing research in that area. Many space advocates perceived the DOE
group as enemies on space solar power and space in general. But all of this was
a matter of gross distortion, motivated by the belief of many advocates that
they have a duty to be “effective lawyers for the cause,” stretching
the truth to win support for their side.
In
actuality, neither Koomanoff nor I intended to take a position for or against
SSP at that time. To the extent that SSP might
be viable, we felt it was critical to identify and understand the critical
obstacles as early as possible. No new technology benefits from having so much
biased leadership that the key problems are addressed only after billions of
dollars are wasted on overly simplistic blind alleys. Many major government
investments have suffered from this kind of problem.
Several
years later, NASA was funded to take a fresh look at SSP. Under the leadership
of John Mankins, intensive studies were carried out which identified even more
problems with the original reference design – and also suggested
possible solutions.
In
1990, NSF and NASA held a joint workshop on the possibilities for using
computational intelligence to reduce the costs of building SSP systems. Inputs
were obtained from many creative thinkers, such as Fred Koomanoff, Prof. Toshio
Fukuda of Nagoya University (arguably one of the world’s two top experts
in intelligent robotics, with ties to Honda), Rhett Whittaker of
Carnegie-Mellon, and Criswell and
Ignatiev (major champions of the use of extraterrestrial materials). This
workshop, plus subsequent discussions on microwave power beaming, led to a new
joint solicitation in March 2002, in which the Electric Power Research
Institute (EPRI) also joined as a funding sponsor. By searching on JIETSSP at www.nsf.gov, one may easily locate the program
announcement, which summarizes the major technical themes and provides web
sources for extensive supporting literature.
Mankins
and I served as joint co-chairs of the JIETSSP working group. NSF and NASA made
equal contributions.
The
announcement generated 99 proposals. After very intense and tough peer review,
the panels recommended $21 million worth of research – but only $3
million was available. Thirteen proposals were awarded – but in subsequent
months we learned that some of the highly recommended proposals we could not
fund were of enormous importance, and I for one deeply hope we can make up for
the neglect of much critical work in this area.
SSP
is still a very high-risk area of technology (though not so risky as “the
hydrogen economy,” in my view). We have good ideas of what needs to be
done next, but we have to make guesses (as in wildcat drilling) and we need to
be very adaptive in setting priorities and considering new ideas. But we need to
remember that the challenge to policy-makers is to minimize the risk that the
world does not have an affordable
source of baseload solar power in time to prevent life-threatening problems.
The risk that really matters is the risk that would be greatest if we did not
explore both of the promising
large-scale options for solar power. We have no guarantees of success –
but if in the end we do not succeed, nothing else will guarantee our survival
either. In the “review criteria” letter for this initiative, we
asked the reviewers to consider the risk that we might lose something critical
– and delay the market-competitiveness of SSP – if we did not fund any particular project.
In
my personal opinion (this week),
there are three types of technologies which have reasonably good hopes for
delivering electricity at prices competitive with coal and with conventional
nuclear. Two of them could theoretically be cost-competitive as soon as ten
years from now, if we were lucky and if political difficulties did not get in
the way. The three are: (1) a “conventional” design augmented by
nanotechnology; (2) a novel solar-nuclear design, spelled out for the first
time (other than emails) in this paper; (3) designs based on extraterrestrial
materials.
By
“conventional” designs, I mean some of the more recent kinds of
designs evaluated in previous work on life-cycle costing funded by NASA. In the
Technical Interchange Meeting of September 2002 held at the Ohio Aerospace Institute, it
was stated that those designs still show costs of about 17 cents per kilowatt
hour, assuming earth-to-orbit costs of only $200/pound and aggressive but
realistic development of established technologies in key enabling areas. These
designs involve the usual idea of vast arrays of solar cells (typically with
intense concentrating mirrors, as in the work of Entech), followed by wires
and transformers carrying electricity to a system beaming
microwave power to earth. Of course, 17 cents per kilowatt hour is not
competitive, and that is why breakthrough-oriented research is essential.
Some
critics have questioned whether even this reference analysis is right. For
example, how sure are we that microwave power beaming can work as well as
needed? In actuality, the NSF team which generated this solicitation in the
first place started out as an even three-way partnership of three people
– myself, Dr. James Mink, and Dr. James Momoh. Dr. Mink had been
editor-in-chief for many years of the main IEEE journal on microwaves, and the
microwave engineering community made a major contribution to making this
activity possible. They believed that new research is essential – but
that the chances of success look good enough to warrant the effort. Some
serious critics worry that power beaming might interfere with wireless
communications; however, one of the funded PIs, Prof. Frank Little, predicts
that he will soon have hard empirical demonstration that his designs for
avoiding such problems really do work. Many other concerns regarding human
health and the ecology have been studied very carefully already; though further
studies are being funded, the claims of many anti-SSP lobbyists appear
grossly misleading at this point, in my opinion.
But
how could we get below 17 cents per kilowatt hour, when even that assumes transportation
costs far below what is in sight at NASA?
One
of the proposals to JIETSSP (for which the PI has granted me permission to
reveal some private information) involved an extension of recent exciting
vehicle design work funded by the Air Force and by NSF, in other contexts.
Ramon Chase of ANSER, Inc., proposed a breakthrough design for reusable
rocket-plane transportation, exploiting little-known off-the-shelf technology
from a combination of Lockheed-Martin, Boeing proper and from the former McDonnell-Douglas.
Peer review questioned whether NASA or NSF would have the ability to manage the
vehicle development work required to follow on such a design – but it
also supported the claim that this particular new design concept is highly
unique and highly credible. A target of $200/pound for the initial vehicle, to operate within a decade, seems highly
defensible (unlike the case with earlier designs whose proponents made similar claims
without a comparable level of detailed engineering cost support). The design also
lends itself to a better-than-average subsequent learning curve, and a number of
dual-use benefits. In my personal opinion, it would be a criminal loss to the
world if this new and exciting option is not fully followed up on. Fortunately,
there are some highly competent and well-motivated experts in the Department of
Defense who understand what is at stake, and have a serious hope of navigating
the very, very difficult political shoals and vested interests. But the
politics is extremely complex, and it is hard to predict how it will come out
in the end. I deeply regret that random events kept us from funding the final
design study in 2002, and I hope that a way will be found before too long.
But
even that would not be enough for the conventional design to be competitive in
time for the first launch. The problem is that the present designs require a
lot of weight to be launched – about half of it for the solar cell part,
and half of it for the wires and transformers. However, at a recent workshop on
nanotechnology at the Woodrow Wilson Center in Washington D.C., breakthroughs
were presented both in solar cell design and in wires and transformers, using
nanomaterials. Dr. Richard Smalley – who received the Nobel Prize for
the discovery of fullerenes – stated that the weight of wires and
magnets (including transformers) could be reduced for sure by a factor of 6,
using such materials. Such wires need not
meet the very specialized electrical standards required in nanoelectronics, for
which the costs are relatively high; most likely, the lower-cost kinds of
materials discussed by Varadan of Penn State may be relevant. At that same
workshop, physical samples were distributed of new types of bendable,
light-weight nanomaterial-based solar cells. Weight reductions of a factor of
five or more could possibly lower costs from 17 cents per kilowatt hour to
something more like one-fifth of that; even with allowances for
other, smaller cost factors, this looks as if it provides
real hope for becoming competitive with today’s base-load electricity.
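As a rough consistency check on that arithmetic, one can scale the reference cost by the weight-reduction factor. This is only a sketch under stated assumptions: that launched mass dominates the 17 cents/kWh figure, and that cost scales roughly linearly with launched mass. Real SSP cost models have many other terms.

```python
# Naive scaling of SSP electricity cost with launched mass.
# Assumes launch-related mass dominates the reference cost of
# 17 cents/kWh; treat this as an optimistic illustration only.
REFERENCE_COST = 17.0          # cents per kWh, from the NASA-funded studies
MASS_DOMINATED_FRACTION = 1.0  # assumption: cost ~proportional to mass
WEIGHT_REDUCTION = 5.0         # factor-of-five reduction via nanomaterials

scaled = REFERENCE_COST * (
    (1 - MASS_DOMINATED_FRACTION)
    + MASS_DOMINATED_FRACTION / WEIGHT_REDUCTION
)
print(f"scaled cost: {scaled:.1f} cents/kWh")  # 3.4 cents/kWh
```

Setting MASS_DOMINATED_FRACTION below 1.0 models the "other, smaller cost factors" that do not shrink with launched weight.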
At
the workshop, no one could say whether the breakthrough solar cells could
survive the harsh environment of outer space. However, it seems as if the key
titanium-based nanoparticles can be painted onto almost any clear plastic.
There are plenty of very light clear plastic film materials which are
space-rated (and part of the previous NASA SSP program).
So
that is one option.
Another
option which I thought of a few months ago, and discussed so far only by email,
is a hybrid of solar and nuclear technology. This is a riskier approach –
but if it should happen to work, the potential near-term cost reduction could
be startling.
Hybrids with nuclear technology
tend to generate automatic irrational visceral reactions from those who fear
all things nuclear. But it is important to remember that the national security
and environmental problems with conventional nuclear energy on earth are almost
entirely the result of the neutrons which fly out of those systems on earth.
There are alternatives.
Nuclear fission and conventional
deuterium-tritium fusion both produce huge amounts of neutrons. There is no
prospect at present of magnetic bottle (“tokamak”) fusion being
able to use different fuels like pure deuterium or deuterium with helium-3.
However, John Perkins of Lawrence Livermore Laboratories has developed a new,
breakthrough design for fuel pellets to be used with “inertial” or
laser-induced fusion.
This has potential advantages on earth – but still
some serious limitations. The D-D reaction
still spits out about one-third of its energy as neutrons, which leads to all
the usual problems on earth. The D-He3 reaction is hard to do in a purely
earth-based operation, since the nearest significant sources of He3 are on the
moon or (maybe, so far as I know) the asteroids. The high-powered lasers
required for laser-induced fusion are a huge cost problem on earth, because of
the need to tie up giant power sources to supply the electricity which drives
them, and the massive construction reminiscent of the old supercollider
proposal.
But
my idea: why not use Perkins’ kind of design in space? Why not use
lightweight mirrors and light-to-light lasers floating in orbit, in place of
the giant construction project on earth? Why not let the one-third neutrons
just float away into space (where they turn into protons in 12-15 minutes on
average anyway)? Instead of having a big nuclear reaction chamber, why not
simply post a couple of big magnets near the place where the light hits the
pellets and the energetic protons come out? (In other words, why not use a
lightweight MHD system to extract the electricity from the current of protons?)
A quick cost estimate, based on $200/pound to orbit and $200/pound from low
orbit to geosynchronous orbit, suggests that the cost of carrying these pellets
to orbit would add only 0.1 cents per kilowatt hour to the cost of the
electricity generated. The cost of the deuterium as such would be negligible;
“heavy water,” deuterium oxide, can be extracted in vast quantities from ordinary
seawater.
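The 0.1 cents/kWh figure can be reproduced with a hedged back-of-envelope calculation. The effective energy yield per kilogram of pellet mass assumed below (~3×10^12 J/kg, far below the theoretical D-D yield of roughly 10^14 J/kg, to allow for burn fraction, pellet packaging, and conversion losses) is my assumption, not a number from the text:

```python
# Transport cost of fusion fuel pellets to geosynchronous orbit.
# $200/lb to low orbit plus $200/lb to GEO, per the text; the
# effective energy yield per kg of pellet mass is an assumed
# round number covering burn fraction and conversion losses.
LAUNCH_COST_PER_LB = 200.0 + 200.0     # dollars per pound, earth -> GEO
LB_PER_KG = 2.2046
EFFECTIVE_YIELD_J_PER_KG = 3.0e12      # assumption: net usable energy
KWH_IN_J = 3.6e6                       # joules in one kilowatt-hour

mass_per_kwh_kg = KWH_IN_J / EFFECTIVE_YIELD_J_PER_KG
cost_per_kwh = mass_per_kwh_kg * LB_PER_KG * LAUNCH_COST_PER_LB
print(f"transport cost: {cost_per_kwh*100:.2f} cents/kWh")
# ~0.1 cents/kWh, consistent with the estimate in the text
```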
The biggest challenge here would
be the design of the laser or lasers, suitable for generating the kinds of
pulses Perkins requires. I have discussed this at length with Leo DiDomenico
and Jonathan Dowling of the Jet Propulsion Laboratory (JPL). (Dowling is a
unique world-renowned expert in experimental quantum optics and quantum
information science. DiDomenico works with him on lasers and other topics.)
They both believe that there is very real hope here. More precisely, they are
now working out the details of a new laser design which might be enough to do
the job. They describe their present status as follows. They are working on
single-strand High Power Fiber Laser (HPFL) technology. They are working on
bundling the HPFL strands to create a coherent large-aperture high
average-power system, and on direct solar pumping of the bundle and the
strands. They are using Photonic Band Gap techniques to create large mode-area
fiber lasers (already known in the literature). They are also using inflatable
mirror systems to create the etalon. They are also continuing to follow up on
our discussions of larger energy applications, like the use of lasers in fuel
creation, fusion and SSP. They are also looking at new approaches to
earth-to-orbit transportation. (In fact, we have funded Ray Chase and Princeton
to explore radical air-breathing approaches to earth-to-orbit transportation.
The “near term” vehicle design discussed above is critical as a way
to consolidate technology needed as a steppingstone to such more advanced
possibilities.)
There are no fundamental reasons
why this new option could not be implemented – especially given the unique
environment in space where large structures held together by tension are far
easier to assemble than on earth. (JIETSSP is also funding four projects in
robotics assembly. Greg Baiden of Canada and Penguin ASI has demonstrated
working “teleautonomous” robotic mining systems which prove that
this kind of complex project probably can be done with today’s robotics
technology – but additional work is needed to nail down the options and
optimize performance.) In discussions between myself and JPL – the early
cost guesstimates for electricity look extremely promising. Cooperation and
expanded efforts with Lawrence Livermore would be essential to bringing this
option to fruition.
Two further issues are important
to evaluating this option.
First, the JPL laser effort is
central – but there is good reason to believe that other laser options
should be explored as well. This paper is not the right place to elaborate on
the technical issues of laser design, but some of us do need to think about
those issues. Experience so far
suggests that we need a combination of focused efforts like an expanded
effort at JPL, in parallel with broader efforts widely open to universities and
small businesses (as in JIETSSP) to dredge for new ideas and creative people.
Second, the cost of electricity
beamed from space in this option would be mainly a matter of the capital cost
in building the laser(s). Previous work (e.g. some collaboration between myself
and Prof. Richard Fork of the University of Alabama, reported at the Technical
Interchange Meeting previously discussed) suggests that light-to-light lasers by
themselves provide an interesting option for SSP. No one knows as yet
whether they would be four times as expensive or one-fourth as expensive as
today’s conventional SSP designs, or more or less, but they generally
appear comparable. However, it is well known that a laser fusion system can
augment the power output by a factor of 100 or more. When 100 times as much
power is produced at almost the same capital cost, the cost per kilowatt hour
is reduced by a factor of 100. If it were not for Murphy’s Law, one might
seriously hope for baseload electricity at less than one cent per kilowatt hour
here. Murphy’s Law is very real in engineering – but there still is
serious hope here of providing electricity at costs significantly less than
what we all are paying today. This would not only be an economic boon, but also
a great stimulus to the kind of transition discussed in the previous section.
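The factor-of-100 argument is simply capital amortization: if the laser dominates the capital cost and fusion gain multiplies the delivered power at roughly constant capital, the cost per kilowatt-hour falls in proportion. A minimal sketch follows; the absolute capital cost, power level, and lifetime are hypothetical placeholders, and only the ratio comes from the text:

```python
# Capital-cost amortization for a laser-driven SSP system.
# All absolute numbers are hypothetical placeholders; the point
# is that cost/kWh falls ~1/gain at fixed capital cost.
def cents_per_kwh(capital_dollars, power_kw, hours):
    """Levelized capital cost, ignoring financing and O&M."""
    return 100.0 * capital_dollars / (power_kw * hours)

CAPITAL = 5.0e9          # hypothetical laser + platform capital, dollars
BASE_POWER_KW = 1.0e6    # hypothetical 1 GW light-to-light laser output
HOURS = 30 * 365 * 24    # 30-year life, near-continuous sunlight in GEO

base = cents_per_kwh(CAPITAL, BASE_POWER_KW, HOURS)
with_fusion = cents_per_kwh(CAPITAL, BASE_POWER_KW * 100, HOURS)

print(f"laser alone:           {base:.2f} cents/kWh")
print(f"with 100x fusion gain: {with_fusion:.4f} cents/kWh")
```

Murphy's Law enters through the placeholders: extra capital for pellet handling, magnets, and beaming would raise both numbers, but the 100-to-1 leverage of the fusion gain survives.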
Given
that the cost of lifting material from earth to orbit has been the main cost
driver in conventional designs, it also makes sense to pay close attention to
options which lift up most of the material from other places. These were not
included in the scope of JIETSSP, mainly because of insufficient funds to do
justice to the topic. However, Ignatiev’s work on making solar cells in
situ on the moon is extremely serious; Ignatiev has been a long-time grantee of
NSF, very knowledgeable in the manufacture of all such solid state devices.
This is a long-term back-up option, but the exploration of such options might
have other important benefits even if the other paths to SSP should work out
sooner.
Summary
and Conclusions
This paper has proposed a vision of global energy
sustainability, which may well be necessary (though not sufficient) to
long-term survival of the world economy. It requires that we focus our efforts
far more directly and effectively on four key variables: affordable,
sustainable car fuel; affordable, sustainable primary energy; CO2;
and global nuclear security.
Despite
the huge amount of talk and money and legal discussion on global sustainability,
our chances of success actually seem to boil down to an array of relatively
less expensive critical issues – few of which are receiving adequate
attention. In order to maximize our chances of survival, we need to
wake up and cooperate more effectively in addressing the
concrete opportunities and challenges which face us.
Dr. Paul J. Werbos.
· The views herein are certainly those of the author, not those of the US government or any agency thereof. In fact, there has been no agency review or consensus process at work here at all; it is strictly a First Amendment exercise, the kind of exercise which is essential to effective information processing in an information economy.