Friday, 1 January 2016

solar power.....

Will Solar Power’s Future be Bright?


Solar cell technology will need a fair share of research money to get going

The sun provides the earth with enough energy in one hour to satisfy a year's worth of the world's needs.
This readily available energy gives us a way to create electricity and heat without emitting carbon dioxide,
one of the main causes of global warming. So shouldn’t we
be making use of it more than we have been?
We have enough technology available to take advantage of
the sun’s energy. But unlike sunlight, this technology costs
money. In fact, solar energy is currently five to ten times as
expensive as energy we get from burning coal.
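That one-hour claim is easy to sanity-check with round numbers (these figures are mine, not the article's: a solar constant of about 1361 W/m², an Earth radius of about 6371 km, and world primary energy use of roughly 5-6 x 10^20 J per year):

$$P \approx S\,\pi R_E^2 \approx 1361\ \mathrm{W/m^2} \times \pi\,(6.37\times10^{6}\ \mathrm{m})^2 \approx 1.7\times10^{17}\ \mathrm{W},$$
$$E_{1\,\mathrm{hour}} \approx 1.7\times10^{17}\ \mathrm{W} \times 3600\ \mathrm{s} \approx 6\times10^{20}\ \mathrm{J},$$

which is indeed on the order of a year's worth of present-day primary energy consumption.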


[Photo captions: Solar photovoltaic cell panels integrated into a back porch awning. A solar roof installation glints in the sunlight. Brazilian homes use 50-watt photovoltaic systems for lighting.]

“To reduce that price we need to make good research and
policy decisions for solar technology right now,” says Aimee
Curtwright, a post-doctoral researcher at the Climate
Decision Making Center (CDMC). These decisions will
shape the future of solar power. To make intelligent
choices, she adds, we need to understand present-day solar
energy technology, estimate how it will advance, and then
make the best judgment we can.
Curtwright, who has a Ph.D. in electro-chemistry, is study-
ing photovoltaic (PV) cells, which are the best tools avail-
able right now for converting light energy to electricity. They are used in the shiny panels you see on roof-
tops or on roadsides, powering traffic signs. There are many types of PV cells available, made from different
materials that lead to differing performances. Researchers
are constantly studying and trying to create newer and
better designs. In fact, they’re working on a new, third
generation of PV cells currently.
Analyzing the benefits and drawbacks of these technolo-
gies tells us which ones could be improved and made
usable soon, she says. The third generation is still a while
away from being practical, she adds, but some technologies
in the second generation could become less expensive in
about a year.



One way to bring down the cost is to do more basic technology research: better solar cell materials and
more efficient designs decrease costs. Other factors that can reduce cost significantly are lower production
costs and higher production capacity. For the past
13 years, as manufacturing plants have become
larger, the cost of PV modules has gone down.
Another important thing that adds to the module’s
cost is the cost of other equipment that goes with
it, such as batteries that store electricity for the night
and inverters that convert DC electricity into AC.
And then there are some inescapable factors that
could affect the progress of solar power, including
how much money is being put into the research and
whether it’s consumers find the option attractive.
Increased research money in certain areas of PV
technology would lead to breakthroughs that create
even better, cheaper PV cells. This increases their
demand, which slashes prices even further, and
could lead to more research money.

Some second generation technologies are ready for
use in the market right now, given the right financial
incentives and some more tweaking. But others are
still immature and need a push in basic research.

Researchers don’t know how these younger nologies will shape up in the future. “We’re not
going to know exactly which new technologies there
are going to be, or exactly how much they’re going
to cost, or how much they’ll cut back carbon-
dioxide,” Curtwright says. But despite those un-
knowns, “someone needs to make an intelligent
decision right now.”
She plans to analyze the issues associated with
various technologies, get further insight from solar
power experts, and develop a portfolio of promising
PV technology. This will provide policy-makers with
data to make more informed decisions. “It should
help in allocating research money,” she says. “In
making choices between basic research in technol-
ogy that’s not going to be ready for use for thirty
years versus fine-tuning the engineering in near-term
technology.”
Where the money goes will make all the difference
in solar energy technology, Curtwright believes.
Through the CDMC, she hopes to guide the science
and technology policy decisions that will let us take
advantage of the abundant solar power available to
us, without having to worry about paying too much.




Solar Energy Facts:
- The earth receives more energy from the sun in just one hour than the world uses in a whole year.
- Japan and Germany lead the world solar market.
- The biggest state market in the U.S. is California, with New Jersey coming in second.
- Solar energy costs 5-10 times more than energy from coal.
Source: www.solarbuzz.com

Wednesday, 30 December 2015

FUTURE COMPUTER TECHNOLOGY AND ITS IMPACT





The digital computer has penetrated many professional
specialties and many aspects of everyday life; computing
technology itself is entering a new and explosive phase
of development. What does this portend for the future?
Let us consider the future impact of computers on
society, business, industry, and the military. First,
however, it is desirable to call attention to a particular
feature of the computer and to one of the ways in which
it is used. Second, I want to describe the advance in
computer hardware in the 15 years of its commercial life-
time, and the anticipated changes during the next decade.
Traditionally, we think of a computer as a device to
do arithmetic. But, as Fig. 1 shows, it is possible to
encode other information in terms of numeric symbols. For
example, in the telephone dialing system the letters EX
are represented by the two digits 3 and 9. This principle
allows the digital computer to accept any information that
can be encoded in symbolic form.

(Any views expressed in this paper are those of the author. They should not be interpreted as reflecting the views of The RAND Corporation or the official opinion or policy of any of its governmental or private research sponsors. Papers are reproduced by The RAND Corporation as a courtesy to members of its staff. This paper was presented to the Board of Trustees of The RAND Corporation and the Project RAND Air Force Advisory Group in November 1965.)

Moreover, by giving the
computer special kinds of operations, we can have it
manipulate such symbolic information. It becomes an
information processing machine. As Fig. 2 indicates, the
computer can parse sentences, drawing a picture of the
sentence structure. If the symbols happen to be those of
algebra, the computer can perform algebraic manipulation.
Or, it can process symbolic pictorial information and re-
construct a picture (Fig. 3). We should think of the
digital computer as a device that can accept and process
any information encoded in symbolic form.
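As a concrete illustration of letters carried as numeric symbols, here is a minimal Python sketch of the telephone-dial mapping mentioned above; the keypad grouping is the classic dial layout, an assumption of mine rather than a figure from the paper:

```python
# Minimal sketch: letters encoded as numeric symbols, as in the telephone
# dialing example ("EX" -> "39"). The classic dial grouping omits Q and Z.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PRS", "8": "TUV", "9": "WXY",
}
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in KEYPAD.items()
                   for letter in letters}

def encode(text: str) -> str:
    """Replace every letter with its dial digit; leave other symbols alone."""
    return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in text.upper())

print(encode("EX"))  # -> "39", matching the example in the text
```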
There is a particular way of using a computer called
modeling or simulation. Consider a bottle of soda pop
(Fig. 4) in which a few chemical radicals and compounds
are present. In the mixture itself there are oxygen,
carbon dioxide, and other chemical ions. Oxygen, nitrogen,
and carbon dioxide are exchanged across the air-liquid
interface. We can mathematically describe the chemical
activity and the energy balance in this system, and com-
pute its equilibrium state. What we have done is to model
a real-life, physical system in terms of a set of mathe-
matical relations.
A model can also be a description of a biological,
physical, economic, political, military, financial,
physiological, psychological, organizational, or any
other system. A model identifies variables in a system
and states the relations between them. If the variables
of the model and the relations between them satisfactorily
represent the real-life situation, then the model is a
precise description of the real-life situation. The model
will exhibit the same behavior as that part of reality
which it simulates. It can be used to perform experiments;
it can be used to explore situations or cases which may be
impossible in reality or which we hope will never happen.
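A toy version of one relation in such a model might look like the following Python sketch; the Henry's-law form and the constant are textbook values I am assuming for illustration, not numbers from the paper:

```python
# Illustrative model of one relation in the soda-bottle system: dissolved CO2
# at equilibrium follows Henry's law, so the state can be computed, not measured.
HENRY_CO2 = 0.034  # mol / (L * atm) at roughly 25 C (assumed textbook value)

def dissolved_co2(partial_pressure_atm: float) -> float:
    """Equilibrium CO2 concentration (mol/L) for a given headspace pressure."""
    return HENRY_CO2 * partial_pressure_atm

print(dissolved_co2(2.5))     # sealed bottle: a few atmospheres of CO2
print(dissolved_co2(0.0004))  # open bottle equilibrating with ordinary air
```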
Now let's briefly trace two branches of computer hard-
ware development. Figure 5 shows what switching circuits
looked like in the early 1950s. Figure 6 represents one
form of contemporary circuit technology, the so-called
solid-logic construction, containing both printed and de-
posited circuits. The large elements are resistors (Fig.
7); the small squares at the right of the figure are
transistors, 40/1000 in. square. (Transistors are fabri-
cated approximately 600 to the square inch.)
We are just entering the era of integrated circuits.
The small square in the center of Fig. 8 is the circuit
proper; the rest is mechanical packaging and external
connections. The material at the bottom is a piece of
thread. Figure 9 shows two examples of an advanced form
of integrated circuit technology. A slab of pure silicon
is successively doped and processed to form a grid of basic
circuit components. The basic background grid (Fig. 10) is
a rectangular area about one-tenth of an inch long and one-
sixteenth of an inch wide. Such a rectangle contains about
40 circuit elements, of which 30 or so are transistors. By
further deposition of conducting and insulating material
over the background grid, the customized fully integrated
or microcircuit is built (Fig. 11). The postage stamp
area about half an inch by slightly over five-eighths of
an inch contains a total of about 800 transistors and 350 other circuit components. The packing density in this ex-
ample is about 3000 circuit elements per square inch. By
the early 70s, integrated circuit technology is expected
to produce packaging densities of 200,000 circuit elements
per square inch, an improvement over present art by a
factor of about 70.
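The quoted improvement factor follows directly from the two densities:

$$\frac{200{,}000\ \text{elements/in}^2}{3{,}000\ \text{elements/in}^2} \approx 67 \approx 70.$$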
Figure 12 shows three generations of circuit packaging.
In the background is a contemporary form of printed cir-
cuit plug-in packages: in the left foreground are commer-
cially available integrated circuit packages; and, front
center is the fully integrated microcircuit. Each repre-
sents the same electronic capability.
Let us turn our attention to the storage component of
a digital computer. The backbone of storage technology
has been and may well continue to be the magnetic core art.
Figure 13 shows a large magnetic core plane from the early
50s; the small one in the foreground is contemporary. Each
represents the same storage capacity, 4096 binary digits,
but the small modern plane is faster by a factor of 15
or so. Figure 14 illustrates the steady decrease in size
of magnetic elements used in stores over the years. The
final entry on the left is hardly visible, and has been
reproduced much enlarged at the right. The tiny X-55
annulus at the top right is magnetic material, seven
thousandths of an inch inside diameter, twelve thousandths
of an inch outside diameter. Such cores are fabricated
into a so-called plane with several wires threading each
core (Fig. 15).
The magnetic core store also comes in large economy
sizes; Fig. 16 shows one with a capacity of 16-million binary digits. Magnetic storage also comes in other forms.
Figure 17 is a disc store with a capacity of 60-million
binary digits; and Fig. 18, an even larger store with a
capacity of about 200-million binary digits.
There are also new forms of terminals enabling men to
communicate directly with machines. Figure 19 shows a
production model of a personal console which is connected
by means of a telephone circuit to a center computer. The
entire conversation between the user and the machine is
carried on via this electric typewriter. Such personal
console stations have given rise to the so-called on-
line, time-shared computing system (Fig. 20). Many users
scattered throughout a building or over a large geograph-
ical area are connected to one or more large, central
computers by a communication system. To each user the
machine appears to be solely his; but, of course, its
enormous speed enables it to circulate among all users,
giving attention to each in turn.
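The scheduling idea behind such a system can be sketched in a few lines; this round-robin toy is my own illustration, not a description of any particular 1960s implementation:

```python
# Toy round-robin scheduler: the central machine gives each console a short
# slice of attention in turn, so rapid cycling makes it appear dedicated to
# every user at once.
from collections import deque

def time_share(jobs, quantum=2):
    """jobs maps a user to remaining work units; returns the order of service."""
    queue = deque(jobs.items())
    trace = []
    while queue:
        user, remaining = queue.popleft()
        trace.append(user)                   # machine attends to this user
        remaining -= quantum
        if remaining > 0:
            queue.append((user, remaining))  # unfinished work rejoins the queue
    return trace

print(time_share({"alice": 5, "bob": 3, "carol": 4}))
# ['alice', 'bob', 'carol', 'alice', 'bob', 'carol', 'alice']
```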
Another new terminal is the graphical input-output
(Fig. 21), which enables a user to input any kind of
graphical or pictorial information and receive the same
kind of output. Slide material may also be projected onto
the rear of the tablet surface for convenient tracing into
the machine (Fig. 22).
Looking briefly at computers over-all, we find that
the machine of the early 1950s was typified by RAND's
recently retired 13-year-old JOHNNIAC (Fig. 23). Today's
typical machine (Fig. 24) is not too much different in
external size, but is somewhat cheaper, contains much
more hardware and eight times the storage, and is faster by a factor of 30 to 50. Computers now can be designed
to fly in space--e.g., the GEMINI machine (Fig. 25).
We are just beginning to see the introduction of
integrated circuits into commercial computers. The arith-
metic section of one is shown in Fig. 26; Fig. 27 shows
the storage part of the same machine. The complete ma-
chine appears in the lower left corner of Fig. 28; the
objects surrounding it are various terminal devices for
coupling the computer to its environment--a display, a
console, a typewriter, and a small magnetic tape unit.
Comparing the old and new computers of Figs. 23 and
28, we find that the 1953 machine weighed about 5000 lb,
had a volume of 300 to 400 cu ft, and required about 40
kilowatts of power. The contemporary computer is a
hundredfold lighter (about 50 lb), a thousand times smaller
(about 1/3 cu ft), and requires 250 times less power (150
watts). Moreover, it has twice the storage and runs about
ten times as fast as JOHNNIAC.
We can summarize the amazing progress of computer
hardware technology in a few trend charts. Figure 29
shows the change in size. From 1955 through 1965, the
size of a central processing unit with its storage has
decreased by a factor of about ten. From 1965 through
1975, the impact of fully integrated circuits is expected
to produce a further reduction in size by a factor of
about 1000. For the two decades of 1955 through 1975,
there will have been a size reduction of 10,000 in the
art for building central processing units. (Figures 29-32
are taken from P. Armer, "Computer Aspects of Technological
Change, Automation, and Economic Progress," The RAND
Corporation, November 1965.)
Figure 30 refers to the cost of computing power--
not the cost of a computer itself. In the first decade
of the computer's existence the cost of doing a million
operations decreased by a factor of about 300. By 1975
the cost will decrease by another factor of 300 to less
than one 200-thousandth of its 1955 value.
Figure 31 shows how machine speed has changed. From
1955 through 1965 the internal speed of the computer in-
creased by a factor of about 200. By 1975 it is expected
that the speed will increase by another factor of 200 or
so; so that by the mid-70s, we can look forward to doing
computer operations at the rate of about a billion per
second.
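The compounded speed-up implied by these two factors is

$$200 \times 200 = 4\times10^{4};$$

if one assumes a mid-1950s internal speed of a few tens of thousands of operations per second (my assumption, not a figure from the text), this lands at roughly $2.5\times10^{4}\times4\times10^{4}\approx10^{9}$ operations per second, consistent with the billion-per-second estimate above.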
Finally, let's look at the installed computing power
in the United States (Fig. 32). In 1955 all installed
computers working together could do about 500,000 addi-
tions per second. By 1965 the machine population could
do about 200 million additions per second, an increase
in capability of about 400 fold. If the same growth rate
continues through 1975, capability will increase by
another factor of about 400. (A somewhat less optimistic
projection for the coming decade still sets the expected
growth at 20 fold or so.) The number of computers in
the Air Force alone has increased from 350 in 1963 to
nearly 700 in 1965.
What does this all add up to? Beginning in the early
1970s, computers will be small, powerful, plentiful, and
inexpensive. Computing power will be available to anyone who needs it, or wants it, or can use it. He may have it
by means of a personal console connected to some large
central computing facility, or he may own a small personal
machine.
We all know, however, that the computer is more than
a piece of hardware; it has to be programmed. Historically
computer programming has been expensive and time consuming,
but we expect the future to be different. Cheap hardware
will enable us to consume vast amounts of computational
power to make a machine convenient and attractive to a
user. Furthermore, with the current or near future state
of computing knowledge, we can frame languages including
appropriate symbols and syntax which are completely natural
to a novice user and to a user trained in any professional
specialty. We can design a tool for a given individual
from the ground up; a tool to match his normal training
and way of thinking.
For example, most automobile drivers don't bother to
understand the details of the engine under the hood, or
even how the automatic transmission works--such knowledge
wouldn't help them to drive better. Similarly, the com-
puter user of the future will not be able to perceive the
inner details of the machine, nor would it help him if he
could. Communication with a machine is becoming that easy.
The new class of users will no more have to be programmers
of the traditional kind than an auto driver has to be a
mechanic to handle his car.
Such astounding and explosive changes in technology
and the growing ease of communication with a computer are
almost certain to have a staggering impact. What are some of the possible consequences of this expected tremendous
growth in information processing?
Extrapolating into the future can sound like science
fiction, but I hope that my predictions now have a cred-
ible basis. My visual summary of the change in computer
hardware could be paralleled by a corresponding summary
of research in application of computers to new and varied
tasks. In particular, we should note the growing capa-
bility of the computer with graphical and pictorial
information.
Various projections have been made of computer
achievements in the 1970s. Let us note one such set of
expectations.
"o Computers will be readily available as a public-
domain service (but not necessarily as a regulated
monopoly).
"o Information per se will be inexpensive and readily
available.
o Large and varied data banks will exist and be
accessible to the public.
"o Computers will be used extensively in management
science and decision-making.
"o Computers will be economically feasible for firms
and activities of all sizes.
"o Computers will process language and recognize
voices.
"o Computers will be used extensively at all levels
of government.
"o Computers will increase the pace of technological
development.
P. Armer, loc. cit.
Let us assume that these expectations have definitely
materialized by the mid-1970s, and consider a period
further distant. Although some of the suggestions that
I will make might arise late in the 1970s, I want to pick
a time more remote; let us consider the mid-1980s, perhaps
the Orwellian year of 1984.
I am not making predictions, but rather I am suggest-
ing things which in my opinion computer technology can
make possible. Whether these suggestions actually materi-
alize into fact will depend, of course, on many other
things--such as political, social, and economic forces;
rate of capital investment, rate of production, etc. The
computer will contribute to our future in two ways: a)
it will make some things possible because of its capa-
bility as a research tool; b) it will be an integral and
operational part of other systems which without the com-
puter would have been impossible. Behind everything I
suggest stand the assumptions which I hope you now readily
accept: computers will be inexpensive, small, powerful,
and plentiful; modeling will be a powerful technique; the
computer as a tool will be a user-oriented device.
I must at least touch on the issue of general social
impact first. The computer is helping technology to move
so swiftly that professional skills rapidly become obsolete
and large blocks of employment openings disappear from one
industry to reappear in another. Frequent retraining and
re-education is likely to become the normal way of life.
Change, not status quo, will be everyone's lot--in civilian
as well as in military careers.
As we shall see, the computer can assist people in
adjusting to this new state of affairs. Certainly the
introduction of computers into industry will improve pro-
ductivity, but will dislocate jobs. It is easy to believe
that men will be without work. However, I am partial to
a more optimistic view. The computing industry per se is
creating new jobs, as well as moving old ones to new
places. The wants of society are not now being met, and
even with the increased output from the economy that
better productivity can bring, plenty of jobs will proba-
bly be available, though they may not be in the "right"
places. We must face and solve the retraining and re-
education issue, a problem which will not be limited to
the labor force. Each professional specialist or admin-
istrative official faces the problem of continuous re-
education and adaptation as well.
By the 1980s, the use of the computer as a teaching
machine will have increased the entire pace of education.
It is an ideal device for exercising, instructing, and
examining students on a large amount of material. Sophis-
ticated training films produced by computers will give
students deep, rapid insight into physical and scientific
problems.
Students will have computational power available to
them wherever they may need it. The public school systems
and universities will have to provide each student with
computational support, either as a personal console connected
to a centralized large computer or as a small computer of his
own, perhaps the size of a small cereal box. (For example, the
Bell Telephone Laboratories has produced a twenty-minute film
by computer--Force, Mass and Motion--which gives great insight
into gravitational laws.)
Parenthetically, in most homes I foresee a personal console
(or small machine) for use by the entire family--it will
be in effect another appliance.
With the increased pace in education, students are
likely to complete their formal schooling much sooner;
alternatively, they will acquire more training in the same
time. Consequently, everyone will have more productive
capability, which in turn will speed up technology and
science. There may be an even more overwhelming effect:
if it is true that youth is an important aspect of sci-
entific creativity, the increased pace of education will
result in more young productive years.
The rapid changes in technology caused by the computer
tend to make technical skills obsolete, but the computer
will help to alleviate the very situation it is causing by
making possible not only rapid re-education and refurbish-
ment of technical knowledge, but also swift acquisition
of new skills.
For example, in the Air Force, manpower skills will
be developed more rapidly; an officer should achieve a
much higher level of technical competence much earlier
in his career, and be able to refresh his abilities
readily. By means of models we can exercise him in a
variety of situations, and so sharpen his judgmental
ability much sooner. Similarly, the computer might be
exploited as a training device in military aid programs
to improve the technological skill level in underdeveloped
countries.
By the 1980s, I expect the tempo of scientific dis-
covery to increase. I previously granted that the pace
of technology will increase; I now refer to the acquisi-
tion of new scientific information. Computers will allow
data to be displayed in ways not otherwise possible; they
will let the scientist observe situations which he could
not otherwise examine.
The computer will be the most important tool ever
available for the conduct of research. Sophisticated
mathematical and simulation techniques will be widely
exploited for many situations and systems which scientists
would not otherwise be able to study; e.g., solving mathe-
matical models of the atmosphere and displaying the re-
sults as motion pictures in order to study weather patterns
in a vastly speeded-up time scale. Such a capability
would be a valuable adjunct to Air Force operations.
In the future, experimentation by computer will be
less expensive than other methods, permitting scientific
investigations otherwise impossible. In fact, I would
not be surprised to find laboratories tending to go out
of style. The man-and-his-computer may well rank ahead
of the man-and-his-laboratory as the source of new sci-
entific knowledge. I foresee a future in which the
principal function of the laboratory is to validate com-
puter models. The quickening pace of scientific investi-
gation will mean new capabilities, new materials, new
technologies, and new tools for the Air Force.
Because vast quantities of information will be trans-
ported from place to place, there will be an enormous
demand for communications services. The networks, however, will have become totally digital; no
longer will we transmit analog voice or video signals.
Cheap digital components will allow the use of sophis-
ticated techniques for removing redundancy from signals
before digitizing and transmitting them. When error-free
transmissions are important, controlled redundancy will
be reinserted into the messages. It is reasonable to
expect that all transmissions will be encoded and hence
private. I can foresee that all the voice, video,
facsimile, and data transmissions needed by a place of
business--or a residence--will be handled digitally over
a broadband cable.
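The pipeline described here, remove redundancy, encode for privacy, then re-insert controlled redundancy for error control, can be sketched with deliberately toy stand-ins (run-length coding, an XOR keystream, and a single parity byte); none of this is meant to represent the actual techniques the author had in mind:

```python
# Toy digital transmission pipeline: compress, scramble, add controlled
# redundancy. Each stage is a deliberately simple stand-in for illustration.
from itertools import groupby

def compress(data: bytes) -> bytes:
    """Naive run-length encoding: (count, value) byte pairs, counts capped at 255."""
    out = bytearray()
    for value, run in groupby(data):
        n = len(list(run))
        while n > 0:
            out += bytes([min(n, 255), value])
            n -= 255
    return bytes(out)

def scramble(data: bytes, key: int = 0x5A) -> bytes:
    """Stand-in for encryption: XOR every byte with a fixed key."""
    return bytes(b ^ key for b in data)

def add_parity(data: bytes) -> bytes:
    """Controlled redundancy: append one XOR-parity byte for error detection."""
    parity = 0
    for b in data:
        parity ^= b
    return data + bytes([parity])

message = b"AAAAAABBBCCCCCCCCCD"
frame = add_parity(scramble(compress(message)))
print(len(message), "bytes in,", len(frame), "bytes on the wire")
```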
We can expect to know how to modify and to control
weather, although we may not have the large energy sources
required for a working system of weather control. The
computer will have made this possible, because of the
general increase of knowledge which it will have supported
and the understanding of atmospheric physics achieved
through modeling and simulation techniques. If we achieve
a working capability for weather control, the consequences
will be measured in the billions of dollars. Universal
good climate will eliminate bad real estate. The Nevada-
California deserts could become the breadbasket of the
United States. Crop failures will vanish because sunshine
will occur in the right amounts at the right times.
"Weather Central" will arrange the storms and probably
even publish an annual schedule of rains and snows. The
weather schedule will be as well known as the dates of
holidays; one may well buy it at the Government Printing
Office for a nominal sum. The ability to control and modify local weather is obviously important to Air Force
operations.
I foresee that our entire engineering design process
will be computer-based. By means of graphical terminals,
the engineer will be able to converse directly with his
machine. He will sketch only the roughest outlines of a
design and let the machine provide all details. Before
the device is built, the machine can exhaustively test
it for him by calculating and simulating its performance.
Engineering drawings will be unnecessary; the blueprint
will be replaced by (say) a roll of magnetic tape con-
taining all the details for automatic fabrication tools.
The medical and biological sciences will probably
use the computer more extensively than the physical
sciences. Hospitals will have become completely computer-
supported in medical as well as business and administrative
aspects. Because they are cheaper and more accurate, com-
puter models will handle most laboratory work. A computer
will be connected to surgical patients in order to monitor
body processes and to warn of dangerous incipient condi-
tions. Consequently, much more daring surgical and medical
procedures will be used. For example, a computer analyzing
electrical activity of the brain can regulate the position-
ing of microelements for brain surgery. Similarly, com-
puters will be used to control prosthetic devices. Post-
operative or intensive-care patients will also be monitored
by a computer.
Even more dramatically, the increased pace of sci-
entific discovery owing to the use of the computer in
research may contribute to the extension of the human lifespan. Through computer analysis and experimentation
with enzyme structures and with the basic building blocks
and codes of proteins, we may learn how to synthesize
specific enzymes, which will make possible the growth of
new body parts or organs, or the revitalization of an
entire body by causing it to resume growth. In fact, if
we learn enough about enzymes and body building blocks,
we ought to be able to grow an organism to specification.
The implications of a significant increase in human
life span are overwhelming. The population explosion
problem would be enormously more important. Industries
based on mortality tables might not survive. Other
organizations would have to find new advancement oppor-
tunities for young people, since present employees would
have a longer productive life. All such changes would
certainly affect religious and social patterns.
The possibilities in medicine and biology have con-
sequences for the Air Force. The computerization of
hospital functions also applies to field medical operations--
keeping track of patients, keeping records on medication,
monitoring patients for dangerous conditions, etc. Per-
haps medical treatment might even be computer based or
even computer controlled. Moreover, body behavior in un-
usual situations or in novel vehicles can be studied
through models.
It has been forecast that by 1975 the files of in-
formation which society needs to govern itself will be
computer based. I refer to information on real property,
credit status, legal status, financial status, licenses,
and so on. In view of population increases, government will have to depend on the computer. Hopefully, this will
make the government more efficient but less expensive.
Because of the computer's ability to accept and correlate
information from the many large data banks and files which
will exist, it will be possible to conduct intensive social and
personal surveillance by any agency that elects to do it.
The depth of surveillance may well surpass simple invasion
of privacy.
A criminal activity data bank is one such information
file that will undoubtedly exist. Even today, the computer
is beginning to contribute to the control of crime by
rapidly retrieving information from files. The future role
of the computer is bound to grow as its data banks expand
and as it becomes better at making inferences from frag-
mentary factual information. A machine will be able to
provide much more incisive indictments about crimes and
criminals. Moreover, for each case it will undoubtedly
recommend a treatment for curing rather than impounding
the criminal.
Remember that in the future computing power will be
readily available to everyone, either as a small personal
machine or as a personal console. The computer will
certainly be useful to society in combating crime. But
might it not also help the criminal plan his crime or
the large criminal organization manage its affairs?
Certain implications of future computer technology
will be peculiar to the Air Force. Without a doubt, any
officer or enlisted man who can use it can have computing
power. If necessary, we will be able to build a machine
the size of a cigarette package. As in civilian life, all communications will have become digital, making com-
puter technology an integral part of communication equip-
ment. Everyone can have private communications by using
a small personal computer for scrambling. The individual
may be permitted to use communication satellites; his
personal computer will assemble the message, encode it,
handle the error-control problem, provide secrecy, etc.
We can provide voice, video, facsimile, and data trans-
mission to individuals as needed--in each case, with
privacy. Computers with wideband data links can provide
graphical communications, allowing widely separated Air
Force elements to discuss plans, documents, maps, etc.,
as though they were in private conference.
We are currently hearing discussions of control of
general war. It seems to me that no enemy will believe
that the United States can control a general war unless
he sees, among other things, a credible and tight command
and control system that is well-trained and regularly
exercised. In fact, in the spectrum of possible deterrent
mechanisms, I believe that a computer-based command and
control system could be a credible deterrent--just as
weapons are.
On this next suggestion I am less certain of the
time. I suspect that it's ten or twenty years further
away than anything I have touched on so far--close to the
end of the century; it is perhaps the most dramatic effect
that computer technology could have on the Air Force:
Computer technology may make obsolete traditional warfare--
warfare in which destructive energy is delivered on an
enemy by weapons. Let me suggest a plausible argument to support this conjecture. Twenty or thirty years from
now we'll have all the computational power that we can
possibly use, so the computer per se will be no problem.
By that time we should also be extraordinarily proficient
in modeling. In particular, we should be able to model
in detail any segment of the world's political or economic
situation--in particular, the status of what we would now
call the enemy. Presumably he will be able to do like-
wise against us. I foresee the possibility that warfare
will be a series of political moves and countermoves backed
up not by exchange of military attacks as in traditional
warfare, but by the manipulation of an enemy's external
environment (e.g., the economy of other parts of the
world) or even his internal environment (e.g., his weather).
One might even argue that warfare could become an exchange
between your model of the enemy and his model of you.
If something such as I have suggested were to come
true or even partly true, the role of the Air Force would
certainly change. What might be its role in such a world?
Perhaps the Air Force would become a professional arm of
the government, as the Army Corps of Engineers presently
is the professional construction arm of the government.
Perhaps the Air Force would become the professional ex-
plorer or adventurer for the government--perhaps the
explorer of space. Perhaps the Air Force will continue
to fight a traditional warfare, but in outer space or on
another planet. Perhaps the Air Force will be the exe-
cutive agent to deliver energy to the atmosphere to achieve
weather control. Whatever, the role of the Air Force
will change.

I've suggested several possible impacts of the com-
puter. I've not detailed any of them, but I hope that
I have shown each to be credible. Many of my suggestions
have important sociological implications and challenges;
I have acknowledged but not explored the interaction be-
tween technological possibility and political, economic,
and social factors. My case for believing that these
events can come about rests on these points:
"o Computer hardware will continue to increase in
speed but reduce in cost and size.
"o The computer can process all kinds of symbolic
information.
"o The technique of modeling allows the computer
to simulate and experiment with all kinds of
systems and situations.
"o We are solving the programming problem.
Dr. William Baker of the Bell Telephone Laboratories
has characterized computer technology as a question; I
know of no answer to it:
What other technology is there in which the
United States has such a commanding lead,
which will have as much effect on how we
design and do things, which will be as per-
vasive, and which will both attract and
appeal so strongly to the young mind?
This brings us to the end of our tour through Com-
puterland. I hope that I have enlarged your image of
the computer or, as we ought to call it, the information
processor. It is the most powerful and most flexible
tool ever available to man and to society. It is not a
replacement for man in any large and encompassing sense;
it will displace him in many jobs, but it also will give him many new opportunities. The computer will touch
men everywhere and in every way, almost on a minute-to-
minute basis. Every man will communicate through a
computer whatever he does. It will change and reshape
his life, modify his career, and force him to accept a
life of continuous change.

Tuesday, 29 December 2015

Power source

1.0 Introduction
Recent advances in high energy plasma physics show that nuclear fusion - the energy source of the sun
and the stars [1] - may provide the corner-stone of a future sustainable energy system. Such power plants
would be safe and environmentally friendly. In particular, one of the main problems of fission reactors,
namely that of a possible uncontrollable nuclear reaction, is eliminated; the problem of radiotoxic waste
is also reduced by many orders of magnitude. Fusion reactors would have almost limitless supplies of fuel and
could be sited anywhere in the world. Fusion is, however, still in the development stage and it is not
expected that commercial power plants will start operation before the middle of this century.
The aim of the present paper is to present the current status of fusion research and to describe the steps
ahead that will lead to power generation. First, we introduce the principle of nuclear fusion and explain
how in a future power plant based on this principle the extremely hot ionised hydrogen gas (“plasma”) is
contained in a magnetic field cage (“magnetic confinement”). We then go on to describe the advances
made in fusion research in the last few years and note that the so-called break-even point has almost
been reached at the Joint European research facility JET in Culham, UK. Subsequently, the factors
affecting the design of a future fusion power plant, its safety and environmental features as well as the
possible costs of fusion power, are discussed. Finally, we consider the role which fusion might play in
various energy scenarios in the second half of the century.
2.0 Principles of fusion
2.1 Mass turns into energy
According to our understanding of modern physics, matter is made of atoms [2]. Their constituents are
positively charged nuclei surrounded by negatively charged electrons. Two light nuclei, when they
approach each other, undergo, with a certain probability depending on their separation, a fusion reaction.
Figure I depicts the reaction of heavy hydrogen and super-heavy hydrogen, deuterium and tritium (known
as isotopes of hydrogen), to give helium (an α particle) and a sub-atomic particle, the neutron. Energy is
gained in the process, which is carried away as kinetic energy by the helium atom and the neutron. At the
same time, mass is lost: the combined mass of the products is lower than that of the reactants. Compared
with a conventional (carbon) combustion process the energy gain is greater by six orders of magnitude! In
principle numerous nuclei could be used as fuel in a fusion power plant. The advantage of deuterium and
tritium is their high reaction probability.
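For reference, the reaction sketched in Figure I is conventionally written as follows (standard values, not taken from this paper):

$$\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV}), \qquad E = \Delta m\,c^{2} \approx 17.6\ \mathrm{MeV}.$$

Set against the few eV released per atom in chemical combustion, this is indeed roughly six orders of magnitude more energy per reaction.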
The aim of fusion research is to design schemes in which light nuclei approach each other frequently
down to such small separations that there is a high chance of numerous reactions taking place. Under
normal conditions nuclei are separated at least by the so-called atomic radius which reflects the presence
of the surrounding electron cloud. Under these conditions fusion does not take place. If the atoms are
heated, the motion of the electrons and the nuclei will increase until the electrons have separated. A hot
gas, where nuclei and electrons are no longer bound together, is called a plasma.


Figure I: Schematic of the fusion reaction in which deuterium and tritium form a helium atom and a
neutron. Mass is lost in the reaction and energy gained.

Even in a plasma, however, the nuclei do not come close enough to react because of mutually repulsive
forces. By heating the plasma to an even higher temperature – one speaks of a very hot plasma - the ions
acquire an even higher velocity, or kinetic energy, and can then overcome the repulsive force. As an
analogy, we can think of a fast ball rolling up a hill against the gravitational force. Clearly, the number of
fusion reactions that take place will depend on the plasma temperature and plasma density.
The production of the plasma and its subsequent heating require of course energy. A successful fusion
power plant requires that the power produced by the fusion reaction exceed the power required to produce
and heat the plasma. The ratio of the power generated to that consumed (the fusion power amplification
factor) is called the Q value. Initially, the plasma will be heated by various external sources, e.g.
microwaves. With increasing temperature, however, the number of fusion reactions also increases and the
fusion reaction itself heats the plasma due to the production of the energetic helium atoms (actually ions,
or α particles). The kinetic energy of the helium nuclei exceeds the average kinetic energy of the nuclei of
the fuel (deuterium and tritium) by orders of magnitudes. The energy is distributed to the fuel nuclei via
collisions, as in a game of billiards. In fact, a point can be reached - termed ignition - when external
heating is no longer necessary and the value of Q goes to infinity. In practice, however, power plant
operation would probably correspond to a Q value of 20-40.
The state of a very hot plasma and its nearness to the ignition condition can be characterised by the
product of temperature, density and the so-called energy confinement time. The latter value describes the
ability of the plasma to maintain its high temperature; in other words, it is a measure for the degree of
insulation of the plasma. Ignition can only be achieved if this “fusion triple product” exceeds a certain
value.
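In symbols, and with an often-quoted ball-park ignition threshold for deuterium-tritium plasmas (the numerical value is my addition, not the paper's):

$$Q = \frac{P_{\text{fusion}}}{P_{\text{external heating}}}, \qquad Q \to \infty \ \text{at ignition}, \qquad n\,T\,\tau_{E} \;\gtrsim\; 3\times10^{21}\ \mathrm{keV\,s\,m^{-3}}.$$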
2.2 Magnetic confinement fusion
The temperatures necessary to ignite a plasma are between 100-200 million °C. Obviously no solid material
is able to confine a medium with such a high temperature. This dilemma is solved by the fact that in the
plasma, all the particles carry an electrical charge and can thus be confined by a magnetic field. (The
charged particles gyrate around the magnetic field lines.) It transpires that a doughnut-shaped
configuration of the magnetic field “cage” is appropriate for this purpose, although the story is actually a
little more complicated: the magnetic field lines not only have to be doughnut-shaped, they also need to
have a helical twist. This scheme is referred to as magnetic confinement.
Different proposals were made to produce helically-wound doughnut-shaped magnetic field cages. The
most successful approach has been the tokamak, first realised in Russia [3]. A sketch is shown in figure II.
The magnetic field is the sum of the toroidal magnetic field produced by the coils shown and the magnetic
field produced by a current in the plasma. The problem associated with the tokamak concept is driving the
current in the plasma. The most important concept applied today is to place another magnetic coil in the
centre of the tokamak (see figure II: solenoid magnet) and to ramp the current in this coil up or down. This will produce a varying magnetic field in the coil which in turn induces a voltage in the plasma (the principle
of induction). This voltage can only be sustained for a limited time - one or two hours at the very most.
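The transformer action can be summarised by Faraday's law of induction,

$$V_{\text{loop}} = -\frac{d\Phi}{dt},$$

so the plasma current is only driven while the flux through the solenoid keeps changing; since the solenoid current cannot be ramped indefinitely, operation in this mode is inherently pulsed.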


Figure II: The tokamak has so far been the most successful magnetic confinement scheme. The magnetic field cage - necessary to confine the charged particles - is produced by the superposition of a toroidal magnetic field and a poloidal magnetic field produced by a current in the plasma.
Base load electricity plants need of course to produce power under steady state conditions. Many current
R&D activities are directed towards finding alternative ways of driving the current in the plasma (via
microwave heating or particle beam injection) or to concentrate on the stellarator, successfully pursued in
several countries, in particular Germany and Japan, in which no current is necessary.
2.3 Alternative path to fusion
Two alternatives to magnetic confinement are discussed briefly here: inertial confinement and muonic
fusion.
In inertial confinement fusion a small pellet of deuterium and tritium fuel is compressed, by exploiting
momentum conservation, to extremely high density and temperature. (Densities of twenty times the density
of lead and temperatures of 100 million °C are envisaged.) The fuel pellet is encapsulated by a layer of
another material and subjected to extremely intense beams of laser radiation or high energy charged
particles. The outer layer heats up and evaporates. The evaporation products move outwards, but the rest
of the pellet is compressed inwards, due to momentum conservation. Inertial confinement is mainly
investigated in the US and France and to a lesser extent in Japan, Britain and other European countries.
Since such experiments can also be used to study the physics of nuclear weapon explosions, much of this
research in the US is financed from the defence budget [4]. Inertial fusion is considerably less developed
than magnetic confinement fusion with respect to the realisation of a power plant.
Muonic fusion, which seemed very promising in the beginning, is now only investigated in a few
laboratories. The idea is to produce muons, which are the heavy sisters of the electron. The muon is
injected into a deuterium-tritium gas mixture. There is a finite probability that the muon will be captured by a tritium or deuterium atom and form a deuterium-tritium molecule. Since the muon is very heavy, the
dimensions of such a molecule are much smaller than those of a normal molecule with bound electrons.
Therefore the nuclei will be much closer to each other and there is a greater likelihood that they will
undergo a fusion reaction. The problem of this scheme is that the production of muons costs too much
energy and that the muon will only “catalyse” about two hundred fusion reactions [5].
2.4 The possible design of a fusion power plant
The various features such as steam generator, turbine and current generator will be the same as in
conventional nuclear or fossil-fuelled power plants. A flow chart of the energy and material flows in a
fusion plant are depicted in figure III. The fuel - deuterium and tritium - is injected into the plasma in the
form of a frozen pellet so that it will penetrate deeply into the centre. The neutrons leave the plasma and
are stopped in the so-called blankets which are modules surrounding the plasma. The neutrons deposit all
their kinetic energy as heat in the blanket. The blankets also contain lithium in order to breed fresh
supplies of tritium via a nuclear reaction (see 4.2). The "ash" of the fusion reaction – helium – is removed
via the divertor. This is the section of the containing vessel where the particles leaving the plasma hit the
outer wall. The outer magnetic field lines of the tokamak are especially shaped so that they intersect the
wall at special places, namely the divertor plates. Only a small fraction of the fuel is "burnt" so that
deuterium and tritium are also found in the “exhaust” and can be re-cycled. The tritium produced in the
blankets is extracted with a flushing gas - most likely helium - and delivered to the fuel cycle.

Figure III: Flow chart for a future fusion reactor: fuel (brown), electrical power (yellow), heat (red), neutron
(grey), mechanical power (black) and cooled helium (blue).



The heat produced in the blanket and the divertor is transported via water or helium to the steam
generator and used to produce electricity to feed to the grid. A small fraction is used to supply electricity to
the various components in the plant itself. Electrical power is required mainly for the cryo-system which
produces low temperature helium for the super-conducting magnets, the current in the magnets, the
current drive and the plasma heating systems.
The reactor core is arranged in different layers like an onion. The inner region is the plasma, surrounded
by first wall and blanket. All this is contained in the vacuum vessel. Outside the vacuum vessel are the
coils for the magnetic field. Since the magnets operate at very low temperatures (superconductors), the
whole core is inside a cryostat (see figure VII).
3.0 Status of Fusion Research
3.1 Plasma physics: “break-even” at JET
Progress on the path to ignition in magnetic confinement fusion research is best characterised by the
improvement in the triple product. As described above, the triple product is the product of plasma
temperature, plasma density and energy confinement time. Figure IV depicts the increase of the triple
product by five orders of magnitude in the last three decades. Only a factor 5-6 remains to be overcome
before ignition is reached. The first promising results were achieved in the Russian tokamak T3, following
which tokamaks were constructed in many countries at the beginning of the seventies. Construction of the
Joint European Torus (JET) started at the end of the seventies. It went into operation in 1983 and remains
the largest fusion device in the world.
The major physics issues in the world-wide fusion program centres are: improvement of the energy
confinement time, plasma stability, particle and power exhaust, and α particle (helium nuclei) heating.
The energy confinement time depends strongly on the plasma dimensions. Larger machines will have
longer confinement times. The confinement time is - as mentioned above - a measure of the heat
insulation of the plasma core and it is clear that a larger plasma insulates the core better than a smaller
plasma. Improvements have also been made by establishing new plasma modes, i. e. stable states
corresponding to particular sets of the various parameters that characterise the plasma. Increased
understanding of the underlying physics and many experimental studies have led to the discovery of new
plasma modes, such as the so-called H-mode in 1982 [6].



Figure IV: The development of the triple product of plasma temperature, plasma density and energy
confinement time in the last three decades. The temperature 1 keV is equivalent to 11 million K.
Establishing the H-mode improves the energy confinement time by a factor of two. Attention now turns
increasingly towards advanced plasma scenarios which are characterised by an internal transport barrier
[7]. Plasma stability is a matter of particular importance for the economics of fusion. The figure of merit is
the ratio of the plasma pressure to the pressure of the magnetic field. This ratio is very small in current
machines. If this value is exceeded, the plasma becomes unstable and collapses in a so-called disruption.
Limits also exist for the plasma density, although these are generally soft and considerable improvements
may be expected in future. Major improvements are expected from active measures to shape the plasma
by special control mechanisms [8].
The particle and power exhaust seemed to be a major problem for several years. As mentioned above, a
viable solution for the power exhaust is the divertor concept [9]: with the help of additional magnets the
stream of plasma particles leaving the core is directed to the divertor plates. These plates are made from
special material, either carbon fibre composites (CFC) or tungsten. Special cooling schemes have been
designed for the plates which have to withstand heat loads of the order of 10 MW/m².
Experimentally it has also been demonstrated that the residence time of helium in the plasma poses no
severe problem [10] and the helium "ash" can be transported efficiently to the divertor to be removed from
the system.


Figure V: Fusion energy production in the Joint European Torus (JET)[11].


At JET, experiments with deuterium and tritium have led to considerable power production [11]. 16.1 MW
of fusion power was produced for about a second, and about 4 MW for a few seconds (see figure V). The
fusion reaction produced for a short period nearly as much energy (65%) as was delivered to the system
in the form of external heating, corresponding to Q = 0.65. While this is an outstanding result in itself, it
also demonstrated the principle of alpha-particle heating as described above.
3.2 Development of fusion technology
Fusion gives rise to complex technologies and still demands progress in various fields such as
superconducting magnets, high heat load materials, materials able to withstand high neutron flux, remote
handling devices and plasma heating techniques.
The next step in the international fusion programme, ITER (= International Thermonuclear Experimental
Reactor) will demonstrate the viability of fusion as an energy source. A special programme was therefore
launched in 1994 (Engineering Design Activity, or ITER-EDA) to assess the key technologies [13]. Seven
tasks were set up in world-wide collaboration to design, construct and test these components. They
encompass construction and testing of a solenoid magnet module, a toroidal field magnet, a divertor
cassette, a blanket module, a sector of the vacuum vessel, remote handling devices for the divertor
cassettes and the blanket module. With the exception of the toroidal magnet, where tests will start soon
(spring 2001) all the tasks have been successfully completed. In the case of the solenoid magnet, performance
exceeded expectations [12]. Remote handling proved to operate satisfactorily [13]. Divertor concepts were
developed that could withstand heat loads of more than 10 MW/m² and had lifetimes expected for regular
ITER operation [14].
Materials for fusion devices need to fulfil two objectives: (i) they should retain their mechanical properties
even after irradiation with intense neutron fluxes and (ii) neutron-induced activation should not lead to the
production of long-lived radioactive waste. A number of materials have been identified as candidates for
future fusion power plants [15]. Experimental data are unfortunately lacking, since no existing neutron
source is able to produce neutron fluxes of the intensity and spectrum expected in fusion plants [16].

Figure VI: Photograph of the prototype sector of the vacuum vessel for ITER (Photo ITER).





4.0 Path to a Fusion Power Plant
The European fusion strategy has always been reactor-oriented. Via two major steps (ITER and
subsequently the demonstration reactor DEMO) the programme is intended to provide the scientific and
technological basis to build and operate economically viable fusion power plants by the middle of the 21st
century. The first step has three major parts: construction and operation of ITER, development of fusion
technologies including advanced materials and improvement of the magnetic confinement scheme.
ITER is a collaboration involving the European community, Japan and the Russian Federation. In the
ITER Conceptual Design Activity (CDA) and the original Engineering Design Activity (EDA) the US was
the fourth partner. The CDA phase began in April 1988 and was completed in December 1990. The EDA
phase lasted from 1994 to 1998. In the current extension of the ITER-EDA the design is being modified to
produce a lower cost, lower performance version. The so-called ITER-FEAT may not reach ignition but will
be characterised by a Q value of at least 10. The design modifications do not change the major objectives of
ITER, namely to prove that fusion can deliver considerably more power than is required by the external
heating and that the complex technology can be mastered.
Figure VII: Sketch of the ITER-FEAT Experiment (photo ITER).




Further improvements of the magnetic confinement scheme are necessary. The pulsed mode of the
conventional tokamak is not feasible for a power plant. Two lines of improvements are followed. The first
is called the "advanced tokamak" in which – amongst others - techniques are developed to replace the
inductive current drive. Many existing machines have already investigated such scenarios. The second
approach is to (substantially) modify the magnetic field cage so that only external magnetic fields are
required and an induced plasma current becomes unnecessary, as in the stellarator. Two large stellarator
projects are now being pursued: the LHD stellarator in Japan went into operation in 1998; the WENDELSTEIN 7-X stellarator in Germany is expected to start operation in 2006. A smaller stellarator is
in operation in Spain.
The development of fusion materials requires the construction of an intense neutron source. A world-wide
collaboration under the auspices of the International Energy Agency (IEA) in Paris has been launched to
design the International Fusion Material Irradiation Facility (IFMIF). The conceptual design report was
produced at the end of 1996.
All these activities, i.e. ITER, advanced concepts and technological development, will form the basis for
DEMO, the detailed design of which can be started after ITER has operated for about five years.
5.0 Characterisation of fusion as power source
5.1 Fusion plant models
A number of detailed system studies have been performed in the last thirty years to explore the possible
design of future fusion plants [17]. On the basis of these studies it is possible to analyse the economic and
environmental impact of fusion power. The detailed design work on ITER adds useful complementary
material.
5.2 Fuel and material availability, energy requirements
One of the main motivations from the very beginning of fusion research has been that fusion can be
considered as a practically unlimited source of energy. The argument is based on the abundance of the
fusion fuels - lithium and deuterium - and the very small quantities required [18]. A 1 GWe fusion power
plant would consume annually about 110 kg of deuterium and 380 kg of lithium.
Deuterium is a hydrogen isotope. In terrestrial hydrogen sources, such as sea water, deuterium makes up
one part in 6700. Given the above annual consumption rates it can be shown that fusion could continue to
supply energy for many millions of years. The oceans have a total mass of 1.4 × 10^21 kg and therefore
contain 4.6 × 10^16 kg of deuterium; moreover, there is already a mature technology for extracting
deuterium. One of its main applications is the production of heavy water for heavy-water-moderated
fission reactors. Existing plants can produce up to 250 t/a of heavy water, corresponding to a production of
50 t/a of deuterium. This would be enough to supply 500 fusion plants, each with 1 GWe capacity.
Deuterium supply therefore places no constraint on the extensive use of fusion.
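The arithmetic behind these figures can be checked with a short calculation. The sketch below is illustrative
only: the hydrogen mass fraction of water and the assumed fleet size are not taken from the text.

    # Rough check of the deuterium resource figures quoted above.
    OCEAN_MASS_KG = 1.4e21          # total mass of the oceans (from the text)
    H_MASS_FRACTION = 2.0 / 18.0    # hydrogen mass fraction of water (assumption)
    D_ATOM_FRACTION = 1.0 / 6700.0  # deuterium abundance among hydrogen atoms (from the text)

    hydrogen_kg = OCEAN_MASS_KG * H_MASS_FRACTION
    deuterium_kg = hydrogen_kg * D_ATOM_FRACTION * 2.0  # a deuterium atom weighs ~2 u
    print(f"Deuterium in the oceans: {deuterium_kg:.1e} kg")   # ~4.6e16 kg, as quoted

    # How long could this supply a large fleet of 1 GWe fusion plants?
    D_PER_PLANT_KG_PER_YEAR = 110   # annual consumption per plant (from the text)
    N_PLANTS = 10_000               # illustrative fleet size (assumption)
    years = deuterium_kg / (D_PER_PLANT_KG_PER_YEAR * N_PLANTS)
    print(f"Supply for {N_PLANTS} plants: {years:.1e} years")  # tens of billions of years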
What about tritium? As we have mentioned above, tritium, also a hydrogen isotope, will be bred from lithium
using the high flux of fusion neutrons. Lithium is found in nature as two isotopes, 6Li (7.4 %) and 7Li (92.6 %).
The two relevant nuclear reactions are

    6Li + n -> T + 4He + 4.8 MeV
    7Li + n -> T + 4He + n - 2.5 MeV

Since the second reaction is endothermic, only neutrons with an energy above the threshold can initiate it. In
most blanket concepts the reaction with 6Li dominates, but in order to reach a breeding ratio exceeding unity
the 7Li content might be essential.
Lithium can be found in:
- salt brines, in concentrations ranging from 0.015 % to 0.2 %;
- minerals (spodumene, petalite, eucryptite, amblygonite, lepidolite), in concentrations between 0.6 % and 2.1 %;
- sea water, at a concentration of 0.173 mg/l (Li+).
The land-based reserves are given in table I according to two different sources.
Table I: Land-based reserves of lithium.

Material | Current production | Reserve [19] | Reserve base [19] | Reserve [20]
Lithium  | 15,000 t           | 3,400,000 t  | 9,400,000 t       | 1,106,000 t
While the annual consumption of lithium in a fusion plant is low, the lithium inventories in the blankets are
much larger [21, 23]. At least a couple of hundred tons of lithium are necessary to build a blanket. It is
expected that most of the lithium can be recovered and re-used, although radioactive impurities such as
tritium will complicate the handling. No detailed concept for recovering lithium has been developed so far.
The lithium supply is, however, a minor problem in the context of the construction of the whole plant:
lithium can be purchased today for around 17 Euro/kg and the blanket containing 146 t of lithium needs to
be replaced five times in the life of a fusion plant, which would amount to only about 12 MEuro. Besides the
land-based resources there is a total of 2.24 × 10^11 t of lithium in sea water. Techniques to extract lithium
from sea water have already been investigated [25], as has the associated energy consumption [26]. The
ultimate lithium resources in sea water are thus practically unlimited.
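The cost and resource figures quoted above can be reproduced with a short calculation; the assumed fleet
size in the second part is illustrative and not taken from the text.

    # Blanket lithium cost over a plant's lifetime, using the figures in the text.
    LI_PRICE_EUR_PER_KG = 17
    BLANKET_LI_KG = 146_000     # 146 t of lithium per blanket
    REPLACEMENTS = 5            # blanket replacements over the plant life

    lifetime_cost_eur = LI_PRICE_EUR_PER_KG * BLANKET_LI_KG * REPLACEMENTS
    print(f"Lifetime lithium cost: {lifetime_cost_eur / 1e6:.1f} MEuro")  # ~12.4 MEuro

    # How long would the sea-water resource last for a large fleet of plants?
    SEAWATER_LI_T = 2.24e11         # tonnes of lithium in sea water (from the text)
    LI_PER_PLANT_T_PER_YEAR = 0.38  # 380 kg annual consumption per 1 GWe plant
    N_PLANTS = 10_000               # illustrative fleet size (assumption)
    years = SEAWATER_LI_T / (LI_PER_PLANT_T_PER_YEAR * N_PLANTS)
    print(f"Sea-water lithium supply for {N_PLANTS} plants: {years:.1e} years")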


Figure VIII: Ratio of the amounts of specific materials needed to construct 1000 fusion plants (for various
plant models) to the current reserves of these materials.
Besides fuel numerous other materials will be necessary in order to construct and operate a fusion power
plant [21, 22]. A first idea of the availability of these materials is sketched in figure VIII. The materials
required to build 1000 1 GWe fusion power plants are divided by the known reserves of these materials.
Beryllium and tantalum seem to pose problems, but this is because these materials are hardly used today
and the proven reserves are probably much smaller than the actual resources.
The energy necessary to produce, transport and manufacture all the materials to build a fusion plant adds
up, in a conservative model, to 3.15 TWh [27,28]. The energy payback time, i.e. the time the plant needs to
deliver the amount of energy required for its construction, is roughly half a year and thus comparable with
that of conventional power plants.
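This half-year figure follows directly from the numbers quoted; the sketch below assumes the 75 % load
factor used later in the cost section.

    # Energy payback time: construction energy divided by annual electricity output.
    CONSTRUCTION_ENERGY_TWH = 3.15   # conservative estimate from the text
    CAPACITY_GW = 1.0                # 1 GWe plant
    LOAD_FACTOR = 0.75               # annual load factor (assumed from section 5.3)

    annual_output_twh = CAPACITY_GW * 8760 * LOAD_FACTOR / 1000  # GWh per year -> TWh
    payback_years = CONSTRUCTION_ENERGY_TWH / annual_output_twh
    print(f"Energy payback time: {payback_years:.2f} years")     # ~0.5 years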
5.3 Cost of electricity
The basis for the cost estimates of fusion power is a plant of 1 GWe capacity based on the tokamak concept.
Conceptually the plant can be divided into the fusion core, which is the heat source, and the rest of the plant,
consisting of turbines, generators and switchboards. The assumptions about the underlying physics and
technology seem well within reach of current achievements. If progress in fusion technology is faster, it
might of course lead to considerably lower costs.
Most of the components of the fusion power core are unique to fusion. The basis for the cost estimates of
these components is (i) existing experience with operating fusion experiments, (ii) the experience with
designing ITER [30] and (iii) numerous system studies. The ITER experience is of particular importance
because it combines system studies and real manufacturing experience. As mentioned earlier, part of the
ITER activities to date have been the design, construction and testing of central components of the
experiment. The following discussion is based on [29,31,32].
For a prototype, magnets make up 30 % of the investment costs of the fusion core; the buildings are another
big item. The rest splits up into numerous smaller items. Blanket and divertor make up 14 % and 3 %,
respectively, although these items will have to be replaced regularly: the divertor every second year, the
blanket every fifth year. Two possible technological developments should be mentioned which might lead to
cost reductions in the long run. The pressure of the magnetic field has to balance the pressure of the plasma.
For specific physical reasons, however, the magnetic pressure needs to be much higher than the plasma
pressure in current installations. Progress in plasma physics could reduce this ratio in future and thus reduce
the size and cost of the magnets. A lower replacement frequency of blanket and divertor, made possible by
the development of advanced materials, might lead to a further reduction.
The cost of electricity (COE) is the sum of the capital costs for the fusion core (39 %) and the rest of the plant
(23 %), the costs for the replacement of divertor and blanket during operation (30 %), and the costs for fuel,
operation, maintenance and decommissioning (8 %). An annual load factor of 75 %, an operating lifetime of
30 years and a real interest rate (corrected for inflation) of 5 % are also assumed. The investment costs for
DEMO are expected to be roughly 10,000 Euro/kW (1995) [29], giving an expected COE of 165 mEuro/kWh.
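These figures can be cross-checked with a standard annualised-capital calculation. The capital recovery
approach below is not taken from the text; it is a minimal consistency sketch using the quoted interest rate,
lifetime and load factor.

    # Annualise the DEMO capital cost and spread it over the annual output.
    INVESTMENT_EUR_PER_KW = 10_000   # DEMO investment cost (1995 Euro), from the text
    INTEREST = 0.05                  # real interest rate (from the text)
    LIFETIME_YEARS = 30              # operating lifetime (from the text)
    LOAD_FACTOR = 0.75               # annual load factor (from the text)

    # Capital recovery factor (standard annuity formula; an assumption of this sketch)
    crf = INTEREST / (1 - (1 + INTEREST) ** -LIFETIME_YEARS)
    annual_capital_eur_per_kw = INVESTMENT_EUR_PER_KW * crf
    annual_kwh_per_kw = 8760 * LOAD_FACTOR

    capital_coe = annual_capital_eur_per_kw / annual_kwh_per_kw    # Euro/kWh
    print(f"Capital-only COE: {capital_coe * 1000:.0f} mEuro/kWh") # ~100 mEuro/kWh
    # The text assigns 39 % + 23 % = 62 % of the 165 mEuro/kWh COE to capital,
    # i.e. about 102 mEuro/kWh, consistent with this rough estimate.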
Collective construction and operation experience is expected to lead to considerable cost reduction due
to accumulated learning processes [33]. Learning curves describe the correlation between the cost
reductions and the cumulated installed capacity. The slope of the curve - the so-called progress ratio -
gives the cost reduction for a doubling of the capacity. A progress ratio of 0.8 is assumed for the novel
components in the fusion core. This ratio is well within the values generally experienced in industry;
possible physics progress is also included. Figure IX shows the expected cost development with time.
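The effect of the assumed progress ratio of 0.8 can be illustrated with a few lines of code; the capacities
used below are arbitrary and only demonstrate the doubling rule.

    import math

    # A progress ratio of 0.8 means each doubling of cumulative installed capacity
    # reduces the cost of the novel fusion-core components by 20 %.
    PROGRESS_RATIO = 0.8
    exponent = math.log(PROGRESS_RATIO, 2)   # ~ -0.32

    def relative_cost(cumulative_units):
        """Cost relative to the first unit after a given number of units built."""
        return cumulative_units ** exponent

    for doublings in range(5):
        units = 2 ** doublings
        print(f"{units:3d} units built -> cost factor {relative_cost(units):.2f}")
    # 1 -> 1.00, 2 -> 0.80, 4 -> 0.64, 8 -> 0.51, 16 -> 0.41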





Figure IX: Learning curves for fusion power plants [29].
Further cost reductions can be achieved by scaling up the plant size or by siting two or more plants at the
same site. When fusion is a mature and proven technology in 2100, costs are expected to be in the range
described in table II.
Table II: Cost of electricity for different fusion plant models.

Plant capacity [GWe] | Plants at the site | Study       | Cost of electricity [mEuro/kWh]
1                    | 1                  | Knight [32] | 96
1                    | 1                  | Knight [32] | 71
1                    | 1                  | Gilli [29]  | 87
1.5                  | 2                  | Gilli [29]  | 67
Studies performed in the US and in Japan arrive at even lower investment and electricity costs [34].
The underlying assumptions do not violate any physical principles but assume tremendous progress in
technology.
5.4 Environmental and safety characteristics (external costs)
5.4.1 Effluents in normal operation
A fusion power plant is a nuclear device with large inventories of radioactive materials. The safe
confinement of these inventories and the minimisation of releases during normal operation, possible
accidents, decommissioning and storage of waste are major objectives in the fusion power plant design.
Besides tritium, the other source of radioactivity in the plant is the intense flux of fusion neutrons, which
penetrates the material surrounding the plasma and causes "activation".
Three confinement barriers are foreseen: vacuum vessel, cryostat and outer building. Small fractions of
the radioactive materials are released during normal operation. The amounts depend strongly on design
characteristics such as cooling medium, choice of structural materials and blanket design. The releases
during normal operation for two different plant models are summarised in table III. A detailed analysis is
presented in [35].
Table III: Doses to the public due to normal operation effluents for two different fusion plant models.
Dose to the most exposed public        | Model 1 | Model 2
From gaseous effluents [µSv/y]         | 0.28    | 0.003
From liquid effluents [µSv/y]          | 0.95    | 0.11
The expected doses to the public stay well below internationally recommended limits [36].
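For orientation, these values can be set against the commonly cited ICRP recommendation of 1 mSv per year
for members of the public; this reference value is not quoted in the text, so the comparison below is only an
illustrative sketch.

    # Compare the normal-operation doses from table III with a public dose limit
    # of 1 mSv/y (1000 µSv/y), a commonly cited ICRP recommendation (assumption).
    PUBLIC_DOSE_LIMIT_USV = 1000.0

    doses_usv = {
        "Model 1, gaseous effluents": 0.28,
        "Model 1, liquid effluents": 0.95,
        "Model 2, gaseous effluents": 0.003,
        "Model 2, liquid effluents": 0.11,
    }

    for source, dose in doses_usv.items():
        share = dose / PUBLIC_DOSE_LIMIT_USV
        print(f"{source}: {dose} µSv/y ({share:.3%} of the limit)")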
5.4.2 Possible accidents
Detailed accident analyses have been performed within the framework of system studies [35] and in even
more detail for ITER [37]. Although ITER is not in all aspects comparable to a later power reactor, many of
the characteristics are similar. Different methods (bottom-up and top-down) are applied to guarantee a
complete list of the accident sequences. Reactivity excursions are for several reasons not possible in a
fusion power plant. Therefore, the most severe accidents are all related to failures of the cooling system.
These failures can be caused by power failures, by ruptures in cooling pipes, or by both. As an example, one
of the most severe accident sequences, a total loss-of-coolant accident, is described below.
Shortly after the accident the fusion reaction will come to a halt. This happens because the walls
surrounding the plasma are no longer cooled and their temperature increases. Impurities, evaporated from
the hot walls, enter the plasma. The larger impurity content in the plasma disturbs its energy balance and
more energy is radiated, cooling the plasma until the fusion reactions are extinguished. With no more
fusion reactions, the only heat source left is the decay heat of the activation products in the structural
materials and the blanket. Detailed calculations show that this heat will be dissipated by radiation to the
inner walls of the cryostat. Temperatures in the structural materials will stay well below the melting
temperature and keep the confinement barriers intact. During such an accident sequence not more than 1
PBq of tritium would be released. Doses for the population would stay in the range of 1 mSv [35,38].
As a worst case scenario it was assumed that the complete vulnerable tritium inventory (roughly 1 kg) of
the fusion plant is released at ground level. The initiator of such an accident could only be a very energetic
external event, such as an aeroplane crash on the plant. Even if the worst weather conditions are
assumed, only a very small area, most likely within the perimeter of the site, would have to be
evacuated [38].
5.4.3 Waste
All the radioactive material produced in a fusion plant is neutron-induced. A detailed analysis of the
amount and composition of the fusion power waste was performed in [35,38]. Time evolution of the
radiotoxicity of the waste is shown in figure X. The plant model assumed is based on available materials.
The picture shows a rapid decrease in radiotoxicity once the plant is shut down. The time evolution of the
fusion waste is compared with the time evolution of the waste from a PWR fission plant and with the
radiotoxicity of ash in a coal-fired power plant. The radiotoxicity of the waste of fission plants hardly
changes on the time scale of a few hundred years and stays at a high level. The radiotoxicity of the fusion
waste, by contrast, rapidly approaches that of the coal ash. It is therefore fair to conclude that the
radiotoxicity of fusion waste does not place a major burden on future generations.

Figure X: Development of radiotoxicity for a fusion plant, a fission plant and the ash of a coal plant. It is
assumed that all the plants produce the same quantity of electricity. The volume of coal ash is of course
2-3 orders of magnitude greater than that of fusion or fission waste.
The impact on the population is rather low. Doses below 60 µSv/y are expected in the case that the fusion
waste is stored in typical waste repositories such as Konrad in Germany or SFR and SFL in Sweden. This
value represents a rather conservative estimate.
5.4.4 External costs
Comparison between competing technologies is usually based on cost arguments, but a comparison on the
basis of environmental performance or safety is often more interesting. It is therefore tempting to look for a
scale which also covers these aspects. One promising approach in this direction is the concept of "external
costs" or "externalities" [39]. All damage and problems that do not contribute to the market price are
reflected in the external costs, which are normally borne by society as a whole. Examples of externalities are
damage to public health, to agriculture or to the ecosystem.
A methodology for the assessment of the environmental externalities of the fusion fuel cycle has been
developed within the ExternE project [40]. The method used is a bottom-up, site-specific and marginal
approach, i.e. it considers the extra effects due to a new activity at the site studied. Quantification of impacts
is achieved through damage functions or impact pathway analysis. The whole fuel and life cycle of the
plant is considered.
The hypothetical plant under investigation is sited at Lauffen in Germany on the river Neckar. Two
different fusion plant models are considered. Most characteristics of these models are taken from the
European fusion safety study SEAFP [35]. The first model utilises a vanadium alloy for the structural
materials and helium as coolant (Model 1). The second model has a water-cooled blanket and martensitic
steel as structural material (Model 2). The parts of the plant not included in the above-mentioned study are
taken from the ITER design and from data for a fission plant.

Figure XI: External costs of fusion [41].


The results (Figure XI) indicate that the external costs of fusion do not exceed those of renewable energy
sources. A major factor in the external costs of plant model 2 is the 14C released during normal operation,
which enters the world-wide carbon cycle. Nevertheless, the individual doses related to these emissions are
orders of magnitude below the natural background radiation. For all models a considerable fraction of the
external costs is due to material manufacturing and to occupational accidents during construction and
decommissioning.
5.5 The possible role of fusion in a future energy system
5.5.1 The global dimension
What is the possible impact of fusion on future energy systems? What role could fusion play to mitigate
greenhouse gas emissions? First, a general answer can be given which reflects well-known patterns of
technological change. For a good review article on this question, see [42]. Technological change is
described by two phases. The first is invention; in the case of fusion, this would be the point in time when
the first commercial power plant goes into operation. The second phase, diffusion, is the period in which
numerous power plants would be constructed in many different places. Diffusion usually follows a very
general pattern which can be described by an S-shaped curve: a smooth increase in market share, followed
by robust growth and finally a smooth approach to a saturation level.
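Such S-shaped diffusion is commonly modelled with a logistic function. The sketch below is purely
illustrative; the saturation level, midpoint year and growth rate are assumptions, not projections from the
text.

    import math

    def market_share(year, saturation=0.3, midpoint=2085, growth_rate=0.1):
        """Logistic S-curve for the market share of a new energy technology.

        All three parameters are illustrative assumptions:
        saturation  - final market share approached asymptotically
        midpoint    - year at which half the saturation level is reached
        growth_rate - steepness of the curve per year
        """
        return saturation / (1.0 + math.exp(-growth_rate * (year - midpoint)))

    for year in range(2050, 2151, 20):
        print(f"{year}: {market_share(year):.1%}")
    # Smooth start, robust growth around the midpoint, then approach to saturation.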
The “market” share of different primary energy sources in the past 150 years has always developed
according to this pattern. In the nineteenth century wood was replaced by coal. In the first half of the 20th
century oil started to replace coal and now natural gas begins to replace oil. Extrapolation of the current
trend would mean that gas would become the most important primary energy carrier in the first half of the
21st century [43]. This would mean that fusion can only attain a considerable market share towards the end
of this century, since the invention phase is expected to happen around 2050. Therefore fusion cannot play a
role as a greenhouse gas mitigation technology before that time. Second, it means that even without further
incentives natural gas, which has a lower specific CO2 emission than coal and oil and which can be
converted to electricity with very high efficiencies (nearly 60 % today, roughly 70 % in the foreseeable
future), would in any case lead to a specific reduction of greenhouse gas emissions. In comparison with coal
this combined advantage would produce roughly a factor of three lower CO2 emissions per kilowatt-hour
delivered. If all coal-fired plants were to be replaced by very efficient gas-fired plants, the electricity demand
could triple without an increase in emissions. Third, the time when the share of natural gas will pass its
maximum roughly coincides with the "invention" (the technological and economic proof of principle) of
fusion.
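The factor-of-three claim above can be illustrated with typical fuel emission factors and plant efficiencies;
the emission factors and the coal-plant efficiency below are common literature values, not figures from this
text.

    # Coal vs. gas: CO2 emissions per kWh of electricity delivered.
    COAL_KG_CO2_PER_KWH_TH = 0.34   # hard coal, per kWh thermal (typical value, assumption)
    GAS_KG_CO2_PER_KWH_TH = 0.20    # natural gas, per kWh thermal (typical value, assumption)
    COAL_EFFICIENCY = 0.36          # typical existing coal-fired plant (assumption)
    GAS_EFFICIENCY = 0.58           # combined-cycle gas plant, "nearly 60 %" in the text

    coal_per_kwh_e = COAL_KG_CO2_PER_KWH_TH / COAL_EFFICIENCY   # ~0.94 kg CO2 per kWh(e)
    gas_per_kwh_e = GAS_KG_CO2_PER_KWH_TH / GAS_EFFICIENCY      # ~0.34 kg CO2 per kWh(e)
    print(f"Coal: {coal_per_kwh_e:.2f} kg/kWh, gas: {gas_per_kwh_e:.2f} kg/kWh, "
          f"ratio ~{coal_per_kwh_e / gas_per_kwh_e:.1f}")       # roughly a factor of three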
Another very important point is of course the future development of energy usage and, in particular, the
electricity demand. Scenarios made by the International Institute for Applied Systems Analysis (IIASA) and
the World Energy Council (WEC) describe various possible paths into the future [44]. Of the scenarios
labelled A, B and C, A is a high-growth scenario, B an average-growth scenario and C an ecologically
driven scenario. Even in scenario C electricity consumption increases considerably after 2050, leaving
enough room for fusion even without replacing older technologies. It must be noted that, given
the long lead-time, alternative low-GHG electricity generating techniques might compete for the same
potential market as fusion. While predicting winners or losers is obviously a very long shot, continued R&D
is an absolute necessity for all of them.
5.5.2 Fusion in Western Europe
In the framework of socio-economic studies on fusion (SERF), which have been conducted by the
European Commission and the Fusion Associations, a study was carried out on the possible impact of
fusion on the future West-European energy market, on the assumption that fusion is commercially
available in the year 2050. The scenario horizon covers the complete 21st century. The scenarios were
computed with the programme package MARKAL [45]. Details of the analysis can be found in [29].
Two different scenarios were explored which differ in the discount rates, the level of energy demand, the
availability of fossil fuels and the energy price projections. The first scenario is called Market Drive (MD):
interest rates on power-generation investments are 8 % (interest rates on end-use investments are higher),
15 % of the world resources of fossil fuels are available to Western Europe, and a rapid increase in the oil
price is expected. The second scenario is called Rational Perspective (RP): discount rates are 5 % across
the whole energy sector, but only 10.5 % of the world fossil fuel resources are available to Western Europe,
and the oil price increases more slowly. Energy demand is higher in the Market Drive scenario. Both
scenarios assume that the capacity of nuclear fission never exceeds the current level. Fission is expected
to be phased out by 2100.


Figure XII: The possible role of fusion in 2100 in the European electricity market [29].

The demand for energy increases in both scenarios. In Market Drive it more than doubles in relation to
the 1990 value and in Rational Perspective it increases by more than 50 %. Steady increases in
efficiency keep the overall primary energy demand roughly constant over the whole scenario horizon. The
demand for electrical energy increases in both scenarios roughly by a factor of two.
The development of energy supply and conversion technologies, especially further progress in economic
performance and efficiencies, is based upon detailed assessments of the literature and on the studies by the
Fusion Associations, and has been, where appropriate, guided by learning curves. The increases in
efficiency and the decreases in costs are time-dependent. A detailed description of the supply technologies
can be found in [46]. Another important point is the future development of fuel prices. An increase in the oil
price to $25/bbl (RP) or $29.5/bbl (MD) by 2100 is expected. The gas price is strongly tied to the oil price.
The price of hard coal remains essentially flat over the whole period investigated. In both scenarios neither
new renewables nor fusion win considerable market shares until the year 2100. Fossil fuels remain the
most important primary energy sources. Two shifts in the use of fossil fuels can be identified. The use of
gas increases considerably until the middle of the 21st century, when the easily accessible natural gas
reserves are exhausted and its price has substantially increased. Coal then regains market share and
becomes the most important primary energy carrier by the end of the 21st century. The picture changes
drastically, however, if future CO2 emissions are to be restricted in order to reduce the risk of climate
change. These cases are constructed in such a way that the global emissions would lead in the long term
to a stabilisation of the CO2 concentration in the atmosphere. Different values for the stabilisation
concentration are assumed. Western Europe would be allowed to produce 10 % of these global emissions.
The time-dependent allowed emissions are constraints in the optimisation. If these constraints are applied
to the scenarios, the energy mix changes considerably. The share of the electricity supply technologies in
2100 is shown in figure XII. Fusion and new renewables such as wind and solar win considerable market
shares. The conclusion can be summarised as follows: fusion can win shares in the electricity market if (i)
the further use of fission is limited and (ii) greenhouse gas emissions are constrained.
Similar studies have been performed in Japan [47] and the US [48].
6.0 Summary
Fusion research has made considerable progress in the last three decades. More than 16 MW of fusion
power has been produced in the joint European experiment JET, at a Q value (fusion power amplification
factor) of 0.65.
Technologies for the next step in the international fusion programme (ITER) have already been improved
by intense engineering R&D and by the construction and testing of prototypes. The ITER experiment still awaits
approval. Sites in France, Canada and Japan are, however, being discussed. ITER is intended to
demonstrate the proof of principle for magnetic confinement fusion as a future energy source.
Detailed investigations on the safety, environmental and socio-economic aspects of fusion have been
performed. Fusion, if fully developed by 2050, will fit into a sustainable energy system and be able to
supply electricity for millennia to come at economically acceptable costs.