The Process of Discovery:
  NCSA Science Highlights-1991

  Director's statement

  Developing the environment

  Researching today's challenges

  Beyond the big bang: Stars and stellar evolution
       W. David Arnett and Bruce A. Fryxell

  At the forefront: Toward the operational prediction of thunderstorms
       Kelvin K. Droegemeier

  Where there's smoke: Multipronged approach to fire modeling
       Kwang-tzu Yang

  Banding together: Coupling in high Tc superconductors
       Ronald E. Cohen

  Going with the flow: Vortex simulation of combustion dynamics
       Ahmed F. Ghoniem

  Quantum leaps: Electronic properties of clusters and solids
       Marvin L. Cohen and Steven G. Louie

  Cracking the protein folding code: Protein tertiary structure recognition
       Peter G. Wolynes

  Adrift in an electric sea: Linear and circular polymer gel electrophoresis
       Monica Olvera de la Cruz

  The cortical connection: Simulation of complex neural tissues
       Klaus J. Schulten

  Turning up the heat: Development of phased arrays for hyperthermia
       Emad S. Ebbini

  Educating tomorrow's scientists


  Director's statement

       Predicting storms, sequencing human chromosomes, discovering
new materials--
  these and the other projects described in this report represent
only a small fraction of the
  exciting work being carried out by users of the National Center
for Supercomputing
  Applications. These are examples of emerging grand challenges--
fundamental problems in
  science or engineering, with potentially broad economic, social,
or scientific impact--that can be
  advanced by applying high-performance computing resources. These
problems, in their full
  complexity, require interdisciplinary team efforts on a national
scale, where collaborators are
  tied together by the National Research and Education Network.

       In order to answer these challenges, NCSA is transforming its
distributed
  heterogeneous computing environment into a single integrated
metacomputer. Our application
  teams are working with local and remote users to drive or probe
the technological implications
  of the metacomputer. This new infrastructure is being furthered by
the High Performance
  Computing and Communications Initiative--an unprecedented
commitment from the federal
  government to accelerate the utilization of new architectures.

       NCSA is committed to providing its diverse constituencies with
production and
  experimental high-performance computing and communications
resources. Talented,
  dedicated staff support these communities by helping them make the
most effective use of
  current and emerging technologies. U.S. industrial researchers
benefit from access to powerful
  new applications designed to take full advantage of the
metacomputer--accelerating the
  movement in American industry from using high-performance
computing and communications
  for analysis to using it for the total design process.

       The nation's future scientists are also active at the center.
As an example, teams from
  four high schools were selected through a national competition to
come to NCSA for
  SuperQuest '91, a three-week intensive high-performance computing
institute. Each student
  submitted an individual proposal describing a research problem
requiring a supercomputer.
  After returning to their high schools, these students continue
their work remotely via the
  Internet using workstations donated to the schools. Other students
also benefit because the
  workstations are used in local programs.

       NCSA is confident that these efforts of the entire national
computational community
  will transform our world by the start of the twenty-first century.

       Larry Smarr, Director


  Developing the environment

  The computational environment of the 1980s was characterized by a
small set of loosely coupled computers:
  one for processing, one for storage, and one for the user
interface. The need for better performance, greater
  speed, and more data storage, combined with ease of use, has led
to a new level of computing--the
  metacomputer. Larry Smarr, director of NCSA, describes the
metacomputer as a gigabit-per-second network
  of heterogeneous, interactive, computational resources linked by
software in such a way that it can be used
  almost as easily as a personal computer.

       The processors of a metacomputer encompass a range of
architectures: massively parallel machines,
  vector multiprocessors, and superscalar systems. A worldwide
infrastructure of interconnected computer
  networks will allow a researcher to reach out across national and
international networks and obtain
  whatever computational resources are appropriate for the research
needs of the moment.

       The development of a national-scale metacomputer, with its
attendant national file system, is a new
  mission of the four NSF supercomputing centers. Combining
intellectual and computational resources into a
  national metacenter will expedite R&D ventures in the areas of
managing large datasets, enhancing and
  expanding the National Research and Education Network (NREN), and
developing software programs that
  allow communication between all the computing components.

  Improving experimental data analysis with the metacomputer
  Driving the technological development is a series of metacomputer
probe projects. These projects are chosen
  to span the wide range of applications used by our national
community of users and to address both
  experimental and computational approaches to science. Two
metacomputer probe projects--as dissimilar as
  mapping the skies and analyzing heart motion--are using innovative
metacomputer technology to
  manipulate observational and instrumental data.

  Typically, radio astronomers record their raw observational data
on tape and then process it weeks or
  months later to obtain images. This time delay makes it difficult
to study time-variable phenomena or to
follow up on unexpected phenomena using radio telescopes. Richard
  Crutcher (of NCSA, and professor of astronomy at the University of
  Illinois at Urbana-Champaign) would like to examine data from his
  telescope while his observational experiment is underway. This
  could be brought about by connecting the telescope system directly
  to the NCSA metacomputer via a long-range, high-speed
network. The telescope system is BIMA
  (Berkeley-Illinois-Maryland Array); the network is BLANCA (a
gigabit network testbed funded by NSF, the
  Defense Advanced Research Projects Agency, and AT&T through the
Corporation for National Research
  Initiatives).

  Eric Hoffman is chief of the cardiothoracic imaging research
section and associate professor of radiologic
  science and physiology at the University of Pennsylvania School of
Medicine. While at the Mayo Clinic, he
  acquired volumes of computerized tomography (CT) data from an
instrument called the Dynamic Spatial
  Reconstructor (DSR). Unlike other scanners, which reconstruct
still 3D images, the DSR collects 3D images
  consisting of 2 million volume elements at up to 60 images per
second. Hoffman has been able to interactively visualize his
  dataset describing a dog's beating heart using
the CRAY Y-MP and an SGI workstation. "At
  the moment, it's very time-consuming to build 3D images of these
beating hearts," says Hoffman. "We can
  do them now only on rare, selected cases. If it became easier,
we'd probably do it routinely, which would be
  a big plus for cardiac analysis."

       NCSA's Biomedical Imaging Group led by Clint Potter, NCSA
research programmer, is developing a
  testbed for a distributed biomedical imaging laboratory (DBIL)
that would allow biomedical instruments
  producing datasets like Hoffman's to be networked transparently to
the metacomputer.

  Simulating reality on the metacomputer
  Numerical experiments study the behavior of complex systems under
controlled conditions and ask the
  question "What happens?"--not just "What is the answer?" Therein
lies the need for the metacomputer--
  no single computer provides the complete computational environment
for computing, storing, visualizing,
  and analyzing the data produced by very large simulations, as in
the three examples that follow.

  Princeton University astrophysicists are collaborating with
scientists at NCSA to develop computer models
  and visualizations of galaxy cluster formation in the early
universe. The team, led by Princeton Department
  of Astrophysical Sciences chair Jeremiah Ostriker and NCSA
research scientist in astronomy and
  astrophysics Michael Norman, has developed the most comprehensive
model to date. The ultimate aim is to
  create a "numerical laboratory" for physical cosmology simulations
that can address a variety of topics,
  including large-scale structure, galaxy formation, and cluster
evolution.

     Using this model, Norman and Ostriker ran a simulation on the
  NCSA CONVEX vector multiprocessor, producing 100 files of data,
  each a snapshot in time, containing as many as 170,000 galaxy
  positions in three dimensions. In order to navigate this 4D data,
custom software was developed for the
  Silicon Graphics 4D/360 VGX high-performance workstation at NCSA.
By mounting the CONVEX file
  systems on the Network File System, these data were read into the
SGI over the network, providing a
  seamless connection between simulation and visualization. "Using
this system, we can interactively explore
  the cluster formation process in space and time," says Norman.
"This is one aspect of the metacomputer as
  applied to this project. The entire project, from initial
simulation to video post-production," says Norman,
  "was accomplished without file transfers--an important aspect of
metacomputing."

  Jack Burns, professor of astronomy at New Mexico State University,
and Philip Hardee, professor of
  astronomy at the University of Alabama, are investigating various
aspects of radio galaxies--and in
  particular, radio jets--using the ZEUS family of codes (developed
by NCSA scientists David Clarke, Jim
  Stone, and Norman). These codes have forged many collaborations
between NCSA and various institutions
  across the country. The ZEUS codes are designed to solve the
equations of gas dynamics--including the
  effects of gravity, magnetism, and radiation--and have applications
in virtually all branches of astrophysics.
  These include stellar winds, supernovae, galactic structure, and
even cosmology. NCSA's integrated
  environment allows ZEUS simulations to be performed on the Cray
systems, CONVEX, or Connection
  Machine, depending on the requirements of the particular problem
and the researcher's approach.
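
  In schematic form (a textbook statement of ideal, self-gravitating
  gas dynamics, not the exact formulation of the ZEUS codes, and
  omitting the magnetic and radiation terms), the equations such a
  code advances in time are

      \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0,
      \qquad
      \rho\,\frac{D\mathbf{v}}{Dt} = -\nabla p - \rho\,\nabla\Phi,
      \qquad
      \frac{\partial e}{\partial t} + \nabla\cdot(e\,\mathbf{v}) = -p\,\nabla\cdot\mathbf{v},

  where \rho is the gas density, \mathbf{v} the velocity, p the
  pressure, e the internal energy density, and \Phi the gravitational
  potential obtained from \nabla^2\Phi = 4\pi G\rho.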

  Bob Wilhelmson, atmospheric research scientist at NCSA, and his
group build a thunderstorm model from a
  set of equations that describe the dynamics of atmospheric
variables: temperature, pressure, wind speed and
  direction, moisture, water, and ice content. Wilhelmson says,
"Gigabytes of model data must be saved,
  analyzed, and displayed during and after a model simulation."

       "The concept of using two or more computers to work on a
problem has been around a long time," says
  Wilhelmson. "Since the early eighties we've used the Cray to run
model simulations. When it was finished
  and the data were stored in files, we'd transfer the files to a
high-powered workstation to do the
  visualization. Now with current technology, it's possible to
consider connecting those two computers [SGI
  4D/360 VGX and CRAY-2] and to have them working simultaneously on
their specific tasks. What we'd like
  to create is a tightly coupled system of model simulation/data
generation, analysis, and visualization, using
  multiple machines to improve throughput and exercise different
computer capabilities."

  Metacomputing and grand challenge teams
  Projects such as those described above are prototyping the
computational environment that will support the
  effort of grand challenge teams in the coming decade. Similar
experiments are underway at each of the NSF
  supercomputer centers, as well as at other federal- and state-
supported institutions--creating a single
  national information fabric. As grand challenge teams develop from
the kinds of research described in this
  report, team members will become both users and co-architects of
the emerging national metacomputer.

  Additional information about the NCSA metacomputing environment is
available in the NCSA magazine, access,
  beginning with the September-December 1991 issue.


  Researching today's challenges

  "The process of scientific discovery is, in effect, a continued
flight from
  wonder."
       -- Albert Einstein


  Beyond the big bang

  Stars and stellar evolution
  W. David Arnett and Bruce A. Fryxell
  Department of Physics
  University of Arizona

  Supernova explosions occur when a massive star exhausts its
nuclear fuel and collapses under
  its own weight. They are among the most violent events in the
universe and are extremely rare.
  On February 23, 1987, a supernova exploded in the Large Magellanic
Cloud (a satellite galaxy of
  the Milky Way), affording scientists a once-in-a-lifetime
opportunity to study the brightest
  supernova event since the invention of the telescope.

       Because of the relative closeness of the star and the high
intrinsic luminosity of supernovae,
  it was possible for astronomers to obtain an unprecedented amount
of data--not only from
  ground-based telescopes, but also from detectors on satellites,
balloons, and rockets. A burst of
  neutrinos emitted during the collapse of the star's dense core
just before the explosion was also
detected. These data have produced many surprises, some of which
  can best be explained by nonlinear fluid instabilities that create
  strong mixing in the
ejecta. To study these processes
  requires numerical hydrodynamical simulations in two and three
dimensions--challenging the
  capabilities of even the largest high-performance computers.

       For the most part, these observations confirmed the existing
theories of how a massive star
  dies. Although the exact explosion mechanism is still uncertain,
the neutrino detections verified
  that the process is initiated by the collapse of the stellar core,
forming a neutron star.
  Subsequently, the remainder of the star is ejected, constituting
the visual display seen on earth.

       Dave Arnett and Bruce Fryxell, professors of physics at the
University of Arizona, have been
  calculating the nonspherical motions, fluid instabilities, and
mixing that occurred during the
  supernova explosion by performing two-dimensional hydrodynamic
simulations on the CRAY-
  2 and CRAY Y-MP systems at NCSA. In order to resolve the extremely
intricate structures that
  develop in the flow, very large computational grids are required,
making the use of a
  supercomputer essential.

       As a result of the vast amount of data collected during the
explosion, it is clear that there are
  serious deficiencies in current models. In particular, there are
many features that are impossible
  to explain if the explosion is spherically symmetric. For example,
spherical models predict that
  the hydrogen in the outer envelope should be ejected at the
greatest velocity, while the heavy
  iron-group elements--formed near the core during the explosion--
should be moving much
  more slowly. However, observations of various spectral lines
indicate that some of the iron-
  group elements are moving at velocities in excess of 3,000
kilometers per second (km/s), while
  there is a significant amount of hydrogen at velocities less than
1,000 km/s. This situation can
  occur only if nonspherical motions take place during the
explosion, causing the original layered
  composition distribution to become mixed.
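
  The mixing invoked here is driven by buoyancy instabilities of the
  Rayleigh-Taylor type noted in the figure captions. As a reminder (a
  classical textbook result, not a result of these simulations), a
  small perturbation of wavenumber k on an interface across which a
  light fluid of density \rho_1 pushes against a denser fluid of
  density \rho_2 grows exponentially at the rate

      \sigma = \sqrt{A\,g\,k}, \qquad
      A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1},

  where g is the effective acceleration; in the decelerating ejecta
  this growth is what turns the initially layered composition into
  the mixed, finger-like structure seen in the images.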

       Arnett and Fryxell's calculations indicate that the amount of
mixing obtained is not sufficient
  to explain all of the observations. In particular, it appears
difficult--if not impossible--to
  accelerate the iron-group elements to the observed velocities in
this way. This result points back
  to the uncertainties in the explosion mechanism. It now appears
that the only way to accelerate
  the heavy elements to sufficient velocities is for substantial
mixing to occur during the very
  early stages of the collapse or explosion. By investigating the
multidimensional hydrodynamics
  of these early stages, Arnett and Fryxell hope to place
constraints on the actual explosion
  mechanism for massive stars.

  (left) Composition structure four hours after the explosion. The
light blue region represents the low-density hydrogen
  envelope. The yellow and orange region is the remnant of the
helium shell which started as a thin spherical shell. The dark
  area in the center is composed of the heavier elements.

  (above) Density structures four hours after the explosion. The
flow shows the low-density bubbles separated by dense
  fingers topped by "mushroom caps" which are characteristic of the
Rayleigh-Taylor instability.

  The images were created on a Silicon Graphics 240 workstation
using software written at the University of Arizona.

  W. David Arnett (left)
  Bruce Fryxell


  "With the CAPS model in place, the nature of numerical weather
prediction
  and forecasting will be forever changed."

  At the forefront

  Toward the operational prediction of thunderstorms
  Kelvin K. Droegemeier
  Center for Analysis and Prediction of Storms
  University of Oklahoma at Norman

  Weather prediction has been identified as one of the principal
components of the Grand
  Challenge Program, and for good reason. Historically, numerical
weather prediction has been a
  driving force behind the advent of digital computing. John von
Neumann, generally regarded
  as the father of computing, realized the potential of computers in
weather forecasting and, in
  the late 1940s, created the now historical first hemispheric
forecast using the ENIAC computer
  in collaboration with meteorologists Jule Charney and Ragnar
Fjørtoft.

  The Center for Analysis and Prediction of Storms (CAPS)--one of the
first eleven Science
  and Technology Centers created in 1988 by the National Science
Foundation--continues this
  tradition of innovation. CAPS' mission is to develop techniques
for the practical prediction of
  weather phenomena on scales ranging from a few kilometers and tens
of minutes (individual
thunderstorms) to hundreds of kilometers and several hours (storm
complexes and mesoscale
  systems).

  Two major developments during the past few years provided the
impetus for proposing the
  creation of CAPS and for moving from a mode of storm simulation to
one of prediction. The
  first is a national multiagency effort known as NEXRAD (NEXt
generation RADar), which will
  result in the placement of some 175 scanning Doppler radars across
the continental U.S. by the
  late 1990s, providing nearly continuous single-Doppler coverage of
the scales relevant to storm
  prediction.

  The second development, stimulated by the first and now the key
element for making storm-
  scale prediction a reality, concerns techniques developed at the
University of Oklahoma for
  retrieving unobserved quantities from single-Doppler data.
Generally referred to as Single
  Doppler Velocity Retrieval (SDVR), this class of methods allows
researchers to recover, using a
  time series of a single observed wind component, the other two
wind components and the mass
  field to yield a complete and dynamically consistent set of
observations with which a storm-
  scale prediction model can be initialized.
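
  In broad outline (shown here only as a generic illustration of the
  idea, not necessarily the specific Oklahoma formulation), SDVR-type
  methods treat the unobserved wind components as unknowns chosen so
  that the retrieved wind field best explains the observed evolution
  of the radar data while remaining dynamically consistent, for
  example by minimizing a cost function of the form

      J(u, v, w) = \int \left[ \frac{\partial Z}{\partial t}
        + u\,\frac{\partial Z}{\partial x}
        + v\,\frac{\partial Z}{\partial y}
        + w\,\frac{\partial Z}{\partial z} \right]^2 dV\,dt
        + \text{(mass-continuity and smoothness penalties)},

  where Z is the radar-observed field and the retrieved (u, v, w) is
  also required to reproduce the measured radial velocity component.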

  Kelvin Droegemeier, deputy director of CAPS, and his team are
capitalizing on these
  developments. The CAPS research effort is divided into four major
thrusts: prediction model
  development, data assimilation techniques, small-scale atmospheric
predictability, and
  implementation.

  Prediction model development. After an extensive study of existing
storm-scale simulation
  codes, CAPS chose to develop an entirely new sequence of models
known as the Advanced
  Regional Prediction System or ARPS. ARPS is designed using new
discrete operator algorithms
  that greatly simplify the code structure and enhance flexibility.
It is completely portable among
  a variety of computers, including those of the massively parallel
class. Droegemeier and his
  team have been evaluating the ARPS model for NCSA's CRAY-2,
Connection Machine Model 2,
  and IBM RS/6000 systems.

  Data assimilation techniques. The accuracy of a numerical forecast
is highly dependent upon
  the accuracy of the model's initial conditions. The time-honored
computer adage "garbage in,
  garbage out" is certainly true in meteorology! One method, called
the adjoint method, involves
  adjusting the model's initial condition until the most accurate
initial state is reached. Originally
  thousands of complex iterations were required. Recent work has
shown that perhaps as few as
  50 or 100 iterations might suffice, though experiments involving
more complete model physics
  and real data are needed before the technique can be viewed as
successful.
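
  Schematically (a standard variational data assimilation statement,
  not the CAPS-specific system), the adjoint method seeks the initial
  state x_0 that minimizes the misfit between the model forecast and
  the observations over an assimilation window,

      J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
        + \frac{1}{2} \sum_k \big[ H_k(M_k(x_0)) - y_k \big]^{\mathsf T}
          R_k^{-1} \big[ H_k(M_k(x_0)) - y_k \big],

  where M_k advances the model to observation time k, H_k maps the
  model state to the observed quantities, y_k are the observations,
  and B and R_k are background- and observation-error covariances.
  Each iteration mentioned above corresponds to one forward model run
  plus one backward integration of the adjoint model to obtain the
  gradient of J with respect to x_0.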

  Small-scale atmospheric predictability. CAPS is attempting to
understand which types of
  thunderstorms tend to be the most predictable, and the degree to
which storm evolution and
  type depend upon the model's initial state.

  Implementation. The many facets of the research and development
program will be joined
  for a preliminary evaluation in 1994. Results from this
operational test will serve as a starting
  point for the future of regional storm-scale weather prediction.

  Droegemeier envisions CAPS research as culminating in a prototype
for self-contained
  regional numerical prediction centers (e.g., one per state or
every two states), each of which
  would ingest its own data from a few nearby NEXRAD systems,
perform all computing locally,
  and disseminate its information to a regional constituency.

  It is estimated that commercial airlines alone could, with
accurate regional forecasts, save
  millions of dollars each year in revenues presently lost to
unanticipated weather delays and
  traffic rerouting. Accurate forecasts of electrical storms would
aid rerouting and switching by
  power and communication utilities. Numerous benefits concerning
logistical planning would
  be available to the defense and space flight communities.

  Droegemeier is confident that, with the CAPS model in place, the
nature of numerical
  weather prediction and forecasting will be forever changed.

  (left) Different timesteps of a strongly rotating thunderstorm.
The simulation was run on the CRAY-2, and the images were
  created using the Plot-3D package on a Silicon Graphics IRIS 4D
workstation.

  (above) Temperature field of a turbulent thermal rising in a 2D
numerical model using the Piecewise Parabolic Method. The
  image was made on a Gould IP8500 system with software written at
the University of Oklahoma.


  Where there's smoke

  Multipronged approach to fire modeling
  Kwang-tzu Yang
  Department of Aerospace and Mechanical Engineering
  University of Notre Dame

  For as long as man has known about the wonders of fire, he has
also recognized its ability to
  destroy. By learning how fire and smoke spread in confined and
ventilated spaces, we hope eventually to develop ways to reduce fire
  hazards and losses of both life and property.

       It is now generally recognized that fire hazards in rooms,
passageways, and other confined
  spaces can only be reduced by a multipronged approach. One such
strategy is fire modeling,
  which develops mathematical models that describe the physical and
chemical processes of how
  fire spreads as a function of the ignition source, space geometry,
and material content.
       Once validated by experiments in small-scale laboratory tests
or full-scale fire tests, these
  mathematical models become computer-based simulation models to
determine the effects of
  significant parameters of fire-spread phenomena. The simulation
results can then be used to
  develop fire reduction measures and to provide a rational basis
for post-fire investigations. Fire
  modeling significantly reduces the need for full-scale fire tests,
which are extremely expensive
  and time consuming.

       Fire models can be categorized as either zone models or field
models. Zone models divide
  the fire-affected environment or space into distinct zones that
can be analyzed separately--
  either empirically or theoretically--in terms of input and output
information based on mass
  and energy balances. While zone models are generally
computationally efficient, they have
  shortcomings: models for some zones are not adequately known and
quantified, and the
  validity of zone models is not certain.

       Field models, on the other hand, are inherently more rational
and capable of revealing
  important large- and small-scale phenomena in a fire-spread
problem. The primary
  shortcoming of field models is that they are computationally
intensive, requiring
  supercomputers to execute numerical solutions.

       The research effort at the University of Notre Dame, led by Dr.
K. T. Yang, the Viola D. Hank
  Professor of Engineering in the Department of Aerospace and
Mechanical Engineering,
  concentrates on developing field models based on numerical
solutions to the governing
  differential field equations of the conservation of momentum,
mass, energy, and spaces.

       The field models are based on three-dimensional, finite-
difference, primitive-variable, and
  microcontrol volume time-dependent formulations. They use a high-
order differencing scheme
  for advection terms in the governing equations to minimize
numerical diffusion errors. The
  models now include the physical effects of large buoyancy,
turbulence, surface and gaseous
  radiation, wall losses, forced ventilation, partitions, and
pressurization. A combustion model
  based on laminar flamelets is being incorporated into the basic
algorithm.
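
  To see why the choice of advection differencing matters, here is a
  minimal one-dimensional illustration (a generic textbook comparison
  in Python, not the scheme actually used in the Notre Dame code): a
  first-order upwind discretization smears a sharp front through
  numerical diffusion, while a second-order scheme keeps it far
  sharper.

      # Advect a square pulse at constant speed with two schemes and
      # compare how much each one smears the initially sharp front.
      import numpy as np

      nx, c = 200, 0.5            # grid points, Courant number u*dt/dx
      cells = np.arange(nx)
      phi0 = np.where((cells > 40) & (cells < 80), 1.0, 0.0)

      def upwind_step(phi):
          # first-order upwind (u > 0): phi_i - c*(phi_i - phi_{i-1})
          return phi - c * (phi - np.roll(phi, 1))

      def lax_wendroff_step(phi):
          # second-order Lax-Wendroff: adds an anti-diffusive correction
          return (phi - 0.5 * c * (np.roll(phi, -1) - np.roll(phi, 1))
                      + 0.5 * c**2 * (np.roll(phi, -1) - 2*phi + np.roll(phi, 1)))

      phi_lo, phi_hi = phi0.copy(), phi0.copy()
      for _ in range(100):                  # 100 time steps
          phi_lo = upwind_step(phi_lo)
          phi_hi = lax_wendroff_step(phi_hi)

      # the low-order front is spread over many more cells (the
      # high-order scheme instead shows small oscillations at the front)
      width = lambda phi: int(np.sum((phi > 0.05) & (phi < 0.95)))
      print("upwind front width (cells):      ", width(phi_lo))
      print("Lax-Wendroff front width (cells):", width(phi_hi))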

       Another significant feature of the current models is that
different complex geometries of the
  fire-affected space can be accommodated without disturbing the
basic algorithm. The Notre
  Dame field models have successfully simulated full-scale fire
tests in rooms, aircraft cabins, and
  ventilated and unventilated underwater vehicles--all carried out in
a decommissioned German
  nuclear-reactor containment building.

       NCSA's CRAY Y-MP supercomputer has been used in this study to
generate numerical
  results that are compared with results of the full-scale fire
test. Comparisons have been made
  with the detailed transient temperature field inside the burn
room. A supercomputing
  environment is required because the computations deal with the
incredibly complex numerical
  solutions to the unsteady compressible Navier-Stokes equations,
the continuity equations, and
  the full energy equation (an integral-differential equation that
incorporates thermal radiation
  effects of a participating medium).

       The future direction of this effort lies in incorporating a
realistic combustion model
  compatible with other physical effects that are already in the
current models; incorporating a
  more rational turbulence model (this effort is now well under way
at Notre Dame); further
  validating field models with results from full-scale fire testing;
and recasting the algorithm for
  parallel processing (the current code is fully vectorized).

  (top left) Contours of velocity along roll cell close to the end
wall.

  (bottom left) Contours of velocity along roll cell showing central
symmetry.

  (above) Velocity vectors of rolls and contours of velocity along
roll cell.

  The images were created using Wavefront Technologies'
DataVisualizer on a Silicon Graphics 4D/25G workstation.
  (Courtesy Mike Krogh.)


  Banding together

  Coupling in high Tc superconductors
  Ronald E. Cohen
  Geophysical Laboratory
  Carnegie Institution of Washington

  Are high-temperature superconductors conventional metals, or are
they exotic, "new forms" of
  matter? Ronald Cohen is investigating this question with
colleagues Warren E. Pickett and
  David Singh at the Naval Research Laboratory, and Henry Krakauer
at the College of William
  and Mary. Using the CRAY-2 system at NCSA and the IBM 3090 at the
Cornell Theory Center,
  they are performing large scale electronic structure calculations
on the oxide superconductors
  YBa2Cu3O7-δ, La2-x(Ba,Sr)xCuO4, and (Ba,K)BiO3 using conventional
band theory within the
  local density approximation (LDA). These are first-principles
calculations in the sense that no
  experimental data are used. The only inputs are the nuclear
charges and positions in the crystal.
  LDA is known to work well for most conventional metals,
semiconductors, and insulators.
  However, if the high Tc superconductors are exotic materials to
which band theory does not
  apply, LDA predictions would be expected to disagree with
experiment. In fact, there is a major
  discrepancy--LDA predicts pure La2CuO4 and YBa2Cu3O6 to be
metallic, whereas they are
  actually insulators.
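
  For reference (the standard textbook statement, not anything
  specific to these calculations), an LDA band calculation amounts to
  solving the Kohn-Sham equations self-consistently,

      \Big[ -\tfrac{1}{2}\nabla^2 + v_{\text{ext}}(\mathbf r)
        + \int \frac{n(\mathbf r')}{|\mathbf r - \mathbf r'|}\, d\mathbf r'
        + v_{\text{xc}}\big(n(\mathbf r)\big) \Big]\, \psi_i(\mathbf r)
        = \varepsilon_i\, \psi_i(\mathbf r),
      \qquad
      n(\mathbf r) = \sum_{i\,\text{occ}} |\psi_i(\mathbf r)|^2,

  where v_ext is set entirely by the nuclear charges and positions
  and v_xc is the local-density approximation to exchange and
  correlation; the band energies \varepsilon_i(\mathbf k) and the
  resulting Fermi surface are the quantities compared with experiment
  below.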

       This result is one of the main reasons many researchers have
assumed that the high-
  temperature superconductors could not be treated by conventional
techniques. When doped,
  the superconductors exhibit metallic conductivity in the normal
state. Cohen has found that
  LDA correctly predicts properties of the doped, metallic
superconductors. Extensions to LDA
  appear necessary to treat the insulating parent compounds of the
high Tc superconductors, but
  the accuracy of LDA for the superconductors is comparable to its
accuracy for other conventional materials.

       Cohen made special efforts to highly converge his computations
to confirm that agreement
  or disagreement with experimental data is not due to numerical
approximations in the
  calculations. The complexity of the high Tc superconductors,
coupled with the requirements for
  high accuracy, makes the computations very difficult and time-
consuming.

       Three types of properties are being investigated for high Tc
superconductors: electronic
  properties such as the Fermi surface that characterizes the
quasiparticle states available for
  scattering in a conventional metal; vibrational properties such as
phonon frequencies,
  eigenvectors, and anharmonicity; and electron-phonon coupling,
which leads to
  superconductivity in conventional superconductors.

       The Fermi surface separates occupied from unoccupied electronic
states in the ground state
  of a metal, with all of the quasiparticle scattering occurring at
the Fermi surface. The shape of
  the Fermi surface governs many properties of metals. For example,
in a magnetic field the
  quasiparticles follow orbits in real space that trace the
contours of the Fermi surface. The
  Fermi surface for YBa2Cu3O7 was calculated shortly after its
discovery and more recently
  Cohen's group has performed more detailed and highly converged
calculations that emphasize
  the 3D electronic structure. Until recently it was not clear that
the superconductors even have
  Fermi surfaces. The presence of a Fermi surface and the excellent
agreement between band
  theory and experiment strongly suggest that the high Tc
superconductors are indeed
  conventional metals that can be described with the well-developed
apparatus known
  collectively as Fermi liquid theory.

       While investigating phonons in YBa2Cu3O7 and La2CuO4, Cohen
generally found good
  agreement between calculated and observed vibrational frequencies.
This indicates that band
  theory gives the correct charge density and static density
response for the high Tc
  superconductors--further evidence that they are conventional
materials. Highly anharmonic
  modes were also studied; these are related to phase transitions
and anomalous dynamical
  properties, and also influence superconductivity.

       The goal of studying high Tc superconductors is twofold: to
determine whether these are
  electron-phonon superconductors similar to conventional low Tc
superconductors and to
  understand why Tc is so high. Cohen and his team have found
indications that the electron-
  phonon interaction is indeed very strong in the high Tc
superconductors. The main difference
  between conventional and high Tc superconductors is that in
conventional superconductors the
  interactions between atomic motions and the electrons are local.
In high Tc superconductors,
  moving an atom affects the electronic interaction on other atoms
due to the low density of states
  and ionicity in the oxide superconductors. This effect greatly
increases the electron-phonon
  coupling strength. The calculations indicate that three things are
needed to achieve high Tc
  superconductivity: a low density of states at the Fermi level,
significant ionicity, and low-mass
  atoms, such as oxygen.

       This grand challenge research in molecular and crystalline
structure and in improving and
  understanding the nature of materials could eventually lead to
using liquid nitrogen rather than
  liquid helium in superconductor design--a substantial monetary
savings. In turn, this could
  produce more powerful, smaller motors, and smaller, faster
computers.

  (left) Calculated Fermi surface of YBa2Cu3O7. The green and red
surfaces are for the Cu-O planes and have been found by photoelectron
  spectroscopy (Arko et al., 1989); the blue and pink surfaces are
chain-related and have been observed by de Haas-van Alphen (Fowler et
al.,
  1991) and positron annihilation (Haghighi et al., 1991),
respectively. The image was created using SunVision on a Sun
SPARCstation 2.
  (above) Change in charge density in La2CuO4 with Oz displacement.
Displacing the oxygen changes the charge density and potential in the
  Cu-O plane, which greatly increases the electron-phonon
interaction for this mode. Data processed with DISSPLA.


  "Armed with supercomputing power, it has been possible to make
some serious
  attempts to solve the equations governing turbulent combustion and
to use
  these solutions to probe into the details of this complex
phenomenon."

  Going with the flow

  Vortex simulation of combustion dynamics
  Ahmed F. Ghoniem
  Department of Mechanical Engineering
  Massachusetts Institute of Technology

  From the flints and dry kindling used a million years ago to the
pressurized combustion
  chambers and ultralow emission fuels of today, sweeping changes
have occurred in the
  technology of combustion. Improved combustion has economic and
environmental--and thus,
  political--implications. Judicious use of fossil fuels is one
issue. Safety and health
  considerations associated with the burning of those fuels, local
and global environmental
  impacts of the combustion process, and propulsion systems for the
21st century broaden the
  picture.

       To understand the dynamics of combustion, researchers study the
physics of turbulent
  reacting flows--the tumbling eddies of gaseous oxidizers and fuels
in which, when the proper
  molecular mix has occurred, chemical energy is converted into heat
and mechanical energy.
  These studies have been conducted using experimental methods,
analysis, and more recently,
  computational modeling. The first approach is expensive and is
limited by what can be
  accurately measured in the hostile environment of intense flames.
The second approach is
  encumbered by the complexity of the mathematical equations used to
model turbulent
  combustion.

       Computational fluid dynamics, known as CFD, enables scientists
to study combustion and
  other aspects of fluid flow via supercomputing tools in a
theoretical construct. In this
  framework, rapidly evolving interacting processes--modeled by a
large set of analytically
  unsolvable equations--can be studied without the need to invoke
simplifying assumptions.

       Ahmed Ghoniem, CFD expert and professor of mechanical
engineering at MIT, has found a
  broad range of current engineering applications for his combustion
research. These include
  propulsion of hypersonic planes; improved design of utility and
domestic burners; safe and
  efficient disposal of toxic wastes in incinerators; cleaner, more
efficient automotive engines;
  reduced noise and air pollution; and fire control and suppression.
With this diversified
  applications base, Ghoniem's work is supported by many different
governmental agencies, the
  automotive industry, the aerospace sector, and the utility
industry.

       Ghoniem's approach has been to model turbulent flows in the
absence of combustion, then
  to successively introduce the additional physics that modify a
turbulent flow when combustion
  occurs--variable temperature, density and pressure effects, and
other changes due to energy
  transfer. It has been recognized that fast and efficient burning
can be achieved if turbulence is
  properly employed to promote mixing among the reacting species
without causing the
  disintegration of the reaction zone or the generation of
instabilities. With plans to develop propulsion systems for
  supersonic aircraft, in which the rates of energy conversion are
  much higher than in traditional subsonic propulsion, reducing toxic
  emissions and noise and understanding the role of turbulence in
  combustion have become even more urgent.
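
  As the title of this article suggests, the underlying numerical
  approach is a vortex method. In schematic form (a generic statement
  of the technique, not the exact MIT formulation), the vorticity
  \boldsymbol\omega = \nabla \times \mathbf u of the flow is
  discretized into Lagrangian elements that are transported by the
  velocity they collectively induce,

      \frac{D\boldsymbol\omega}{Dt}
        = (\boldsymbol\omega \cdot \nabla)\,\mathbf u
          + \nu\,\nabla^2 \boldsymbol\omega,
      \qquad
      \mathbf u(\mathbf x, t)
        = \int \mathbf K(\mathbf x - \mathbf x')
          \times \boldsymbol\omega(\mathbf x', t)\, d\mathbf x',

  where K is the Biot-Savart kernel; combustion then enters through
  additional source terms representing the temperature, density, and
  pressure changes described above.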

       Armed with supercomputing power, it has been possible to make
serious attempts to solve
  the equations governing turbulent combustion and to use these
solutions to probe into the
  physical details and practical implications of this complex
phenomenon. By refining their
  numerical models, Ghoniem and his colleagues hope to develop
better engineering tools for
  optimizing the design of combustion devices. Having access to
these powerful tools will benefit
  both science and industry. Researchers can investigate new
concepts, test the validity of new
  ideas, and demonstrate the applicability of new inventions.
Industry hopes to reduce the time
  and cost of designing new products--a process that currently relies
heavily on traditional
  methods. This concerted effort should improve national
productivity and competitiveness.

       The leap toward modeling such complex phenomena as those
encountered in turbulent
  combustion was made possible through the availability of
supercomputers. The vast memory
  and enormous speed of these machines are indispensable in carrying
out computations that
  represent rapidly evolving, spatially tangled physical and
chemical processes such as
  combustion in automobile engines. Numerical methods that take full
advantage of the
  computer architecture, while maintaining accuracy and stability,
continue to improve the
  interface between physical reality and the computer. Finally, by
using visualization hardware
  and software, scientists and engineers are able to interpret
computer output in familiar terms.

  (left and above) Turbulent mixing. Stills from a simulation show
development of a complex, 3D structure in a turbulent
  mixing layer undergoing a chemical reaction between two streams of fuel
and oxidizer. Two cross sections of concentration are
  shown. Red/yellow indicates high concentration; blue/green low
concentration. The images were processed from CRAY
  Y-MP data using NCAR software on a MicroVAX II computer.


  Quantum leaps

  Electronic properties of clusters and solids
  Marvin L. Cohen and Steven G. Louie
  Department of Physics
  University of California, Berkeley

  One of the goals of condensed matter physics since the late 1920s
has been to explain and
  predict the properties of solids using only quantum theory and
input information about the
  constituent atoms. Marvin Cohen and Steven Louie, professors of
physics at the University of
  California at Berkeley, have recorded a number of firsts in this
area. They address the question
  of how solid-state electronic and structural properties evolve
from the properties of the
  constituent atoms. By studying the properties of combined units--
atoms, molecules,
  microclusters, fine particles, and bulk solids--they can explore
how electronic properties
  change with size and complexity.

  Much of this research relies on calculations requiring numerical
solutions to problems about
  strongly interacting particles in real materials. These
calculations require hundreds of hours of
  Cray CPU time and megawords of computer memory. Several first-
principles quantum
  approaches are used. The numerical algorithms include the repeated
manipulation of large
matrices (with dimensions in the thousands) and extensive use of
three-dimensional fast
  Fourier transforms and Monte Carlo sampling schemes. The
complexity of the calculations is
  usually a strong function of the number of atoms contained in a
unit cell of the crystal.
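
  As a toy illustration of why three-dimensional FFTs dominate such
  codes (a generic plane-wave-style sketch in Python, not the actual
  Berkeley programs), applying the Hamiltonian to a wavefunction uses
  forward and inverse 3D FFTs so that the kinetic term acts in
  reciprocal space and the local potential acts pointwise in real
  space:

      # H|psi> for H = -(1/2) Laplacian + V(r) on a 3D grid via FFTs,
      # O(N log N) instead of an O(N^2) dense matrix-vector product.
      import numpy as np

      n = 32                                   # grid points per dimension
      g = 2 * np.pi * np.fft.fftfreq(n)        # reciprocal-space components
      gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
      kinetic = 0.5 * (gx**2 + gy**2 + gz**2)  # |G|^2 / 2 on the grid

      rng = np.random.default_rng(0)
      v_local = rng.normal(size=(n, n, n))     # stand-in local potential V(r)
      psi_g = rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n))

      def apply_hamiltonian(psi_g):
          psi_r = np.fft.ifftn(psi_g)              # to real space
          v_psi_g = np.fft.fftn(v_local * psi_r)   # potential applied pointwise
          return kinetic * psi_g + v_psi_g         # kinetic applied in G-space

      h_psi = apply_hamiltonian(psi_g)
      print(h_psi.shape)                           # (32, 32, 32)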

  These methods have proven to be highly accurate and capable of
predictive power. A host of
  solid-state properties have been obtained with these methods.
Among these are structural
  information, surface and interface characteristics,
superconducting transition temperatures, and
  phase stability. In addition to contributing to the understanding
of existing materials, Cohen
  and Louie hope to successfully predict the existence of new
materials not previously found in
  the laboratory. One proposed material is comprised of gallium and
arsenide. By controlling the
  ratio of Ga to As they might control the degree to which the
material is a conductor.

  The future direction of this work depends on both theoretically
motivated problems and
  experimental discoveries. Cohen and Louie hope to respond to both
and make contributions
  that are potentially useful to experimentalists and theorists.
Their theoretical calculations can
  simulate conditions that are not easily obtainable. Their plans
for the near
  future are to examine iron, iron oxides, and possibly iron
hydrides under pressure. These
  materials are important for both solid state physics and for
geophysics, but they are difficult to
  examine theoretically.

  They are also exploring the unusual properties of materials based
on elements in the first
  row of the Periodic Table. Examples include the hardest materials
known (diamond and boron
  nitride). Once they compute the structural properties of known
compounds, they expect to
  propose new compounds based on these elements. Calculations will
be done of composite
  materials to look for supermodulus effects, which cause materials
to be less compressible than
  their components. They have recently proposed that it may be
possible to fabricate a carbon-
  nitrogen compound that has a hardness comparable to or greater
than that of diamond.

  A new theoretical tool based on quantum Monte Carlo simulation has
been developed by
  this group for looking at many electron effects. This approach
goes beyond the standard self-
  consistent field theories for solids and opens up a brand new
direction for research on the
  electronic properties of real materials. They plan to apply this
technique to study metallic
  hydrogen, transition metals, and transition metal oxides.

  Cohen and Louie's group has been using computers since the early
1960s. Says Cohen, "The
  supercomputer is a wonderful tool for exploring scientific and
technical problems. It is an
  intellectual aid. Often the simplest physical ideas elude us until
we do a 'full-blown' calculation.
  Then, we are in the position to use hindsight to say, 'Why didn't
we think of that to begin with?'
  So, in addition to the more obvious use of supercomputers for
applying theories, number
  crunching, simulations, and computer experiments, often overlooked
is the use of doing some
  numerical calculations to 'gain insight.'"

  (far left) Calculated magnetization density distribution in bcc
iron in a (110) plane. The
  atoms are located at the hollow centers.  The 3D image was created
using MONGO software on
  a VAX/VMS workstation.

  (left) Electron charge density distribution of the material shown
in the image (above).

  (above) Ball and stick model of the structure of a proposed new
material containing gallium
  (Ga, small spheres) and arsenic (As, large spheres).

  (above) and (left) Images were created with software written at
UC-Berkeley.

  Marvin L. Cohen (seated)
  Steven G. Louie


  ". . . supercomputers allow us to take a more experimental
approach to our
  problems and try out many ideas. This sort of exploration is very
helpful in
  problems that are as wide open as protein folding."

  Cracking the protein folding code

  Protein tertiary structure recognition
  Peter G. Wolynes
  Department of Chemistry
  University of Illinois at Urbana-Champaign

  Almost since their invention, computers have been used to try to
break codes. A grand challenge for scientists is to crack the code
  that determines the folding and three-
  dimensional structure of a protein, given its sequence. In turn,
detailed mechanisms of a
  protein's function can be worked out only when the structure is
known. Thus a lack of
  understanding of the protein folding code is a major impediment to
many areas of molecular
  biology.

  One of the great surprises in this area is that many, if not all,
protein molecules can find an
  organized but complex structure spontaneously. It appears, then,
that the relation between
  sequence and structure is a consequence of the intermolecular
forces. However, fully mapping
  these interactions is a major challenge since the technology of
finding the sequence of proteins is
  quite sophisticated, while determining their structures remains a
difficult experimental task.

  Peter Wolynes, professor of chemistry at the University of
Illinois at Urbana-Champaign, has
  developed some simple models of proteins as information processing
units that provide a
  schematic view of folding. Experimental studies of proteins
suggest that the code is very
  degenerate and robust--many errors can be made in a sequence
pattern but the final structure
  remains essentially the same. This feature suggests an analogy to
brains and neural networks
  where very fuzzy patterns can be recognized. Wolynes and his team
have developed energy
  functions that embody the known patterns of protein structures.
The energy functions are
  determined in a way analogous to that used in neural biology where
patterns reinforce certain
  interactions between neurons. The associative memory polymer
Hamiltonians developed by the
  Illinois researchers are closely related to models used for very
simple neural nets.
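
  As a schematic illustration (a generic Hopfield-style form, not
  necessarily the exact Illinois Hamiltonian), such an associative
  memory energy function rewards chain configurations whose pair
  distances resemble those seen in a database of memory proteins,

      E = -\sum_{\mu} \sum_{i<j} \gamma_{ij}^{\mu}\,
            \Theta\!\big( r_{ij} - r_{ij}^{\mu} \big),

  where r_ij are pair distances in the simulated chain, r_ij^{\mu}
  the corresponding distances in memory protein \mu, \Theta is a
  smooth similarity function (for example a narrow Gaussian), and the
  learned weights \gamma play the role of synaptic strengths in a
  neural network.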

  The polymers in this research have many elements of reality so
that, in a schematic way, the
  folding process that is carried out may resemble the real thing.
At the same time, because the
  folding patterns filter out mostly relevant information, the
folding process can be greatly
  speeded up with the computer so that it happens in the analog of
nanoseconds rather than the
  millisecond-to-second range of real processes. This approach
should give insight into the
  relationship of thermodynamics to the dynamics of protein folding
and may provide practical
  algorithms for predicting low-resolution structures of proteins
when only their sequence is
  known.

  Wolynes and his group use NCSA's CRAY-2 and CRAY Y-MP high-
performance computers
  for their folding calculations. Their algorithms are basically
molecular dynamics simulations
  and Monte Carlo calculations. They also use methods closely tied
to neural networks such as
  back propagation for training auxiliary networks.

  In all of these applications, the amount of data handled is quite
large because the entire set of
  protein crystal structures is used to build the energy functions.
And the molecular dynamics,
  roughly equivalent to one nanosecond of real time, translate into
an hour or so of Cray time for
  a single trial guess at the folding code used in the energy
function. These studies--requiring the
  continuous interplay of hypothesis and testing--would be impossible
with slower computers.

  By focusing on determining structure from sequence alone, this
approach has many
  connections with other problems of sequence analysis that enter
into the human genome
  project. Recognizing protein structures from sequence would be a
catalytic intermediate step
  between the determination of sequence and function.

  More abstract studies of protein folding being carried out in this
context may help in
  understanding folding as it occurs in living cells. Defects in the
folding process have been
  implicated in a variety of diseases, including Alzheimer's
disease.

  Wolynes' future plans include pursuing two biologically oriented
directions. One employs
  more sophisticated sequence comparison algorithms to screen data
for the associative memory
  approach, and the other uses further analogies to higher order
neural processing to achieve
  better folding predictions.

  Supercomputers have greatly affected how the group does research.
Says Wolynes, "I still
  believe that analytical studies provide great insights into
physical and biological processes, but
  to take analytical ideas and translate them into useful results
often involves large-scale
  computations. These can only be carried out with supercomputers."

  The images show the overlap of a predicted protein tertiary
structure with the actual x-ray
  crystal structure of the cytochrome proteins 351C (above) and 1CCR
(left). The algorithms
  used to predict these structures utilize analogies drawn from
other complex systems, e.g.,
  associative memory Hamiltonian neural network schemes. The
computations were performed on
  the CRAY-2, and the images were created using Wavefront
Technologies' DataVisualizer.


  "[Certain] problems I am working on now can only be approached by
computer
  simulations. [High-performance] computers have provided the means
to
  analyze problems experimentally or analytically intractable, to
confirm
  theories, and to generate models that explain experimental
results."

  Adrift in an electric sea

  Linear and circular polymer gel electrophoresis
  Monica Olvera de la Cruz
  Department of Materials Science and Engineering
  Northwestern University

  A basic question of the human genome project is how to sequence
human chromosomes--
  molecular complexes containing millions of precisely ordered
units. Monica Olvera de la Cruz,
  associate professor of materials science and engineering, and
Northwestern University graduate
  student Dilip Gersappe have been tackling this problem. In
particular, they are studying gel
  electrophoresis, an important experimental technique widely used
in modern molecular biology
  for separating molecules according to size.

  The simplest form of the technique consists of applying a constant
electric field to a gel (a
  three-dimensional random network) that contains the charged
molecules of interest. After a
  period of time, chains of different sizes separate physically in
the gel. The shorter the chain, the
  faster it migrates in the applied field direction. Unfortunately,
the basic technique can only
  separate DNA of up to roughly 30,000 base pairs (the molecular size unit
of DNA). The rest remain
  clumped in a tangled web.

  The first approach to solving the problem is to understand the
separation process and
  determine why DNA chain mobility becomes independent of molecular
size. The equations of
  motions of a long chain drifting through a network are too
intractable to be solved analytically.
  And simplified diffusion models cannot be constructed because the
shape of the molecule while
  drifting is unknown; whether the chain is stretched or contracted
in the presence of an external
  field has to be found by solving the dynamics.

  In an effort to understand the process and optimize the separation
technique, the
  Northwestern researchers investigated the chain dynamics using a
detailed off-lattice computer
  simulation of the process. Unlike lattice or grid models which
assume the motion of polymers
  can be broken into discrete chunks, off-lattice is a continuum
model where this assumption is
  not made. The team found that although the mobility of long chains
is a constant that is
  molecular-size independent, the chains undergo cyclic oscillations
from contracted-to-stretched
  conformations that are molecular-size dependent. The longer the
chain, the larger the cycle and
  amplitude of the oscillations.
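
  A heavily simplified sketch of this kind of calculation (a generic
  overdamped bead-spring chain drifting through fixed obstacles in
  two dimensions, written in Python; the actual Northwestern
  off-lattice simulation is far more detailed) looks like:

      # Brownian dynamics of a charged bead-spring chain in a constant
      # field, with soft repulsion from randomly placed posts.
      import numpy as np

      rng = np.random.default_rng(1)
      n_beads, dt, n_steps = 60, 1e-3, 5000
      k_spring, q_field, kT = 100.0, 2.0, 1.0
      posts = rng.uniform(0.0, 20.0, size=(150, 2))   # obstacle centers

      # start from a random-walk coil near the origin
      x = np.cumsum(rng.normal(scale=0.3, size=(n_beads, 2)), axis=0)

      def forces(x):
          f = np.zeros_like(x)
          bond = x[1:] - x[:-1]            # harmonic springs between neighbors
          f[:-1] += k_spring * bond
          f[1:] -= k_spring * bond
          f[:, 0] += q_field               # electric force along +x on each bead
          for p in posts:                  # soft repulsion from each post
              d = x - p
              near = np.sum(d**2, axis=1) < 0.3**2
              f[near] += 50.0 * d[near]
          return f

      extension = []
      for step in range(n_steps):
          noise = rng.normal(scale=np.sqrt(2 * kT * dt), size=x.shape)
          x += forces(x) * dt + noise      # overdamped (Brownian) update
          if step % 50 == 0:
              extension.append(x[:, 0].max() - x[:, 0].min())

      # the chain's extent along the field rises and falls as it hooks
      # on and slips off posts--the contracted-to-stretched cycles above
      print("mean extension along the field:", np.mean(extension))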

  These changes in conformations can be used to separate longer
chains by using pulsed field
  rotations. The group examined the effects of alternating the field
direction between forward and
  transverse (orthogonal pulsed field gel electrophoresis) as a
function of molecular size. They
  found that mobility is reduced due to orientation effects and that
the longer the chain, the larger
  the reduction in mobility. Therefore, many chains that are
unresolved by constant field gel
  electrophoresis can be separated by pulsing the field. The
reduction in mobility saturates for
  very long chains, however, suggesting a window of molecular sizes
for which the resolution is
  maximum for a fixed pulsed rate.

  The agreement of this theoretical work with the experimental
observations of others has
  produced a totally revised model for the dynamics of pulsed gel
electrophoresis, leading to an
  increased understanding of the limitations and ways of improving
the separation technique.

  The Northwestern group has also analyzed, in collaboration with J.
M. Deutsch, professor of
  physics and astronomy at the University of California at Santa
Cruz, the statistics of polymer
  chains in random environments. In a porous medium, this comprises
the initial environment in
  a separation process such as a gel electrophoresis. In the absence
of an external field the linear
  chains are unaffected by the presence of the random environment.
However, the time average
  monomer density (TAMD)--a measure of the frequency at which a point
is visited by a
  monomer--was found to have huge fluctuations from point to point.
In the absence of
  impurities, the TAMD is a constant because all regions are equally
likely to be visited by the
  monomers. In a random medium, however, the TAMD is a multifractal
measure; i.e.,
  fluctuations are so large and self-similar that an infinite number
of exponents associated with
  the moments are required to characterize the distribution.
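
  In the standard multifractal formalism (a textbook definition,
  given here only for context), one covers the lattice with boxes of
  size \ell, forms the normalized measure p_i(\ell) of the TAMD in
  box i, and examines how its moments scale,

      \sum_i p_i(\ell)^{\,q} \sim \ell^{\,\tau(q)},
      \qquad
      D_q = \frac{\tau(q)}{q - 1};

  for an ordinary (uniform or simply fractal) measure the generalized
  dimensions D_q are all equal, whereas for the TAMD in a random
  medium a whole spectrum of exponents is needed, which is what is
  meant by a multifractal measure.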

  The research group is currently studying gel electrophoresis of
molecules with various
  topologies, such as circular molecules. These studies may lead to
new methods for
  characterizing and separating polymers of various topological
configurations, such as those
  observed in closed circular DNA. With high-performance computers
such as the CRAY-2 as a
  basic tool for scientific development, Olvera de la Cruz and her
team can continue to probe the
  mysteries surrounding the human genome and generate models that
explain experimental
  results.

  (left, from top to bottom) Chain conformations during gel
electrophoresis. (1) Initial
  conformation of the chain occurs in an array of obstacles. (2) In
the presence of an
  electric field, closed J (hooks) conformations result. (3) The
chain opens up when the
  field direction is rotated. (4) Open U shape conformations result
after a period of time.

  (above) Time average monomer density (TAMD) in a random medium,
where 20% of the lattice
  sites are occupied by impurities. Periodic boundary conditions are
used in a box of 64 x 64
  lattice sites for a chain of 120 monomers. The image was processed
from CRAY-2 data using
  Mathematica on a Sun SPARCstation 2.


  The cortical connections

  Simulation of complex neural tissues
  Klaus J. Schulten
  Department of Physics
  University of Illinois at Urbana-Champaign

  If we are to improve our understanding of the brain, we must
observe its structure and activity.
  Even the simplest brain functions involve the activity of
thousands of neurons whose
  simultaneous observation is beyond our means. To cope with this
immense problem, a
  combined research approach involving observation and simulation of
complex neural tissues
  has emerged. An example of this new method is the study of the
representation of optical
  images in the visual cortex of macaque monkeys by Professor Klaus
Schulten and research
  assistant Klaus Obermayer at the Beckman Institute for Advanced
Science and Technology at
  UIUC, and Professor Gary Blasdel of Harvard Medical School.

       Using voltage-sensitive dyes, Blasdel observed the electrical
activity of hundreds of
  thousands of nerve cells in a contiguous brain region of a macaque
monkey. He also noted that
  this activity in area 17 of the visual cortex depends on the type
of images presented to the
  monkey. The conversion of such images into electrical cell
activity allows for the monitoring of
  important elements of cortical organization--the visual map. Such
maps are not genetically
  specified in detail, but rather are acquired by an animal during a
self-organizing process
  involving  visual experience. In the course of the development of
a visual map, the cortical areas
  modify their connections to the retinas of the eyes. Each square
millimeter of cortical area
  consists of 100,000 or more neurons, with each neuron having 1,000
or more connections.

       Schulten and coworkers are simulating the evolution of visual
maps in young monkeys and
  young cats. Since the simulation involves varying 30,000,000 synaptic
  connections and presenting millions of visual images, the computational task
is enormous. They have
  found NCSA's Connection Machine Model 2 (CM-2), with its 32K
processors, ideally suited for
  their simulations. Particularly useful is the CM-2's DataVault, which
  allows storage of data from intermediate developmental stages. The
  researchers have now
obtained results of many
  different visual maps.

       The research addresses the following questions: How does nature
string the millions of
  connections between the eyes and the brain through the optic nerve tract?
  And why does nature prefer the particular representation of images observed
  for cats and monkeys over other
  possibilities? The simulations on the CM-2 show that a small set
of simple developmental
  rules--all well within the known property range of neural systems--
together with suitable
  visual experiences, suffice to develop the maps as they are
observed. In fact, the simulations
  and observations agree so closely that it is sometimes hard to
tell them apart. A more detailed
  analysis of the visual maps generated and observed shows that the
representation of optical
  images achieved in the visual cortex combines a variety of image
attributes: location of the
  stimulus in the visual field, orientation of line segments in the
image, stereo information, color,
  and texture.

       Why the particular representation shown? It appears that nature tries
  to combine many image
  attributes while at the same time preserving their continuity. To
appreciate this, one should
  realize that the visual maps, mathematically speaking, establish a
  connection from a many-dimensional space of image attributes (each image
  attribute adding one dimension) to a merely
  2D cortical area. The advantage of maximizing the conservation of
continuity through the visual
  maps is twofold: first, a continuous change of attributes, such as
those representing a moving
  object, is presented to the brain by a change of activity that is
  also nearly continuous; second, processing images for higher cognitive
  tasks is simplified because the map places neurons that need to compare
  their signals close together.
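
  A convenient way to see how a continuity-preserving map from a
  many-dimensional attribute space onto a 2D sheet can arise is a
  Kohonen-style self-organizing feature map. The Python sketch below is a
  deliberately small, hypothetical illustration of that general idea, not the
  CM-2 simulation described above: random stimuli carrying three attributes
  are presented one at a time, the best-matching unit on a small 2D grid is
  found, and the winner and its neighbors are pulled toward the stimulus, so
  that neighboring units come to represent similar attributes. The grid size,
  learning rate, and neighborhood schedule are arbitrary assumed values.

    # Toy self-organizing feature map: stimulus attributes are mapped onto a
    # 2D "cortical" grid while continuity is preserved. A hypothetical sketch.
    import math
    import random

    GRID = 12    # 12 x 12 sheet of model "cortical" units
    DIM = 3      # attributes: x, y, and an orientation-like value
    STEPS = 2000

    random.seed(0)
    # Each grid unit holds a weight vector in attribute space.
    w = [[[random.random() for _ in range(DIM)] for _ in range(GRID)]
         for _ in range(GRID)]

    for t in range(STEPS):
        stim = [random.random() for _ in range(DIM)]   # random "visual" stimulus
        # Find the best-matching unit on the sheet.
        bi, bj = min(((i, j) for i in range(GRID) for j in range(GRID)),
                     key=lambda ij: sum((w[ij[0]][ij[1]][k] - stim[k]) ** 2
                                        for k in range(DIM)))
        # Learning rate and neighborhood radius shrink as training proceeds.
        eta = 0.5 * (1.0 - t / STEPS) + 0.01
        sigma = 3.0 * (1.0 - t / STEPS) + 0.5
        # Pull the winner and its cortical neighbors toward the stimulus, so
        # nearby units end up representing similar attributes (continuity).
        for i in range(GRID):
            for j in range(GRID):
                h = math.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2.0 * sigma ** 2))
                for k in range(DIM):
                    w[i][j][k] += eta * h * (stim[k] - w[i][j][k])

    # Neighboring units now carry similar attribute vectors, a miniature
    # analogue of the smooth maps compared in the figures.
    print("corner unit:", [round(v, 2) for v in w[0][0]])
    print("its neighbor:", [round(v, 2) for v in w[0][1]])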

       Image processing in brains beyond the level of visual maps is
the subject of future
  investigations. Observations have given strong indications that
the coherences in the firing
  patterns of cortical neurons provide higher brain areas with
information concerning which
  segments of an image make up a visual scene--that is, which parts
of the image are objects (for
  example, an animal), and which belong to the background. The next
generation of simulations
  in computational neural science promises to be even more exciting
than the present since it will
  lead us--together with observations--to an understanding of simple
cognition. Such
  simulations must account for the dynamics of hundreds of thousands
of neurons with their
  millions of connections; present simulations only describe the
static connectivity scheme.
  Schulten and his colleagues are waiting eagerly for the next
generation of massively parallel
  machines with a thousandfold performance increase which they hope
will allow them to
  address the cognitive processes of vision.

  (left) Observed (left side) and simulated (right side) brain maps
are compared. There are actually two maps shown: one in
  color representing the sensitivity of cortical neurons to
orientation, i.e., (top row) red shows cells sensitive to vertical
lines,
  green shows cells sensitive to horizontal lines; and one in grey
(bottom row) showing the sensitivity to input from the right
  (light) and left (dark) eye. All refer to the same brain area.

  (above) Simulated brain map showing cells sensitive to vertical
lines (red) and those sensitive to horizontal lines (green).

  The simulations were run on the CM-2, and the images were rendered to the
  CM-2 frame buffer using C/Paris.


  "It [the supercomputer] allowed me to obtain conclusive answers
fairly rapidly
  to a specified task by using several alternative techniques."

  Turning up the heat

  Development of phased arrays for hyperthermia
  Emad S. Ebbini
  Department of Electrical Engineering and
       Computer Science
  The University of Michigan

  Hyperthermia cancer therapy is an experimental method that is
being investigated extensively
  by a number of researchers. Although hyperthermia itself is not new,
  interest in it has been rekindled by new techniques--particularly tissue
  culture--that allow
biologists to investigate how
  heat alone or with radiation can kill cancerous cells. It turns
out that cells are quite sensitive to
  heat in the part of the cell cycle when they are most resistant to
radiation.

       Hyperthermia cancer treatment depends on raising the temperature of a
  tumor to about 43°C
  for an hour or more. The difficulty is heating the tumor
adequately without damaging nearby
  healthy cells. Some tumors can be heated easily with available
technology; others cannot--
  because of location, because of a peculiar shape, or because they
happen to be highly perfused
  with blood. Increased blood flow, which counteracts the desired
effect by cooling the tissue, is
  the body's response to a rise in temperature. To get an optimum
temperature distribution
  requires a changing energy deposition during treatment--not a
trivial thing to do.

       Several methods are currently used to heat tissue for cancer
therapy. One technique uses
  microwave energy, but this is not ideal for tumors deep in the
body. When the microwave
  frequency is low enough for deep penetration, the wavelength is so
long that heating is difficult
  to focus or control. Another technique uses ultrasound. The
advantage of ultrasound is that
  frequencies that can penetrate deep inside the body still have a relatively
  short wavelength, so the heating can be focused and controlled precisely.
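
  The trade-off can be put in rough numbers with the relation wavelength =
  speed / frequency. The tiny sketch below uses a representative soft-tissue
  sound speed of about 1540 m/s and, for comparison, the free-space speed of
  light; both are assumed, illustrative values.

    # Back-of-the-envelope wavelengths, lambda = c / f (assumed speeds).
    SOUND_IN_TISSUE = 1540.0      # m/s, a typical soft-tissue sound speed
    LIGHT_IN_VACUUM = 3.0e8       # m/s

    print("1 MHz ultrasound:", SOUND_IN_TISSUE / 1.0e6 * 1000.0, "mm")      # ~1.5 mm
    print("100 MHz microwave (free space):", LIGHT_IN_VACUUM / 1.0e8, "m")  # ~3 m

  Tissue shortens the microwave wavelength somewhat, but it remains far
  longer than the millimeter-scale ultrasonic wavelength, which is why
  ultrasound is easier to focus at depth.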

       Emad Ebbini, professor of electrical engineering and computer
science at the University of
  Michigan, is working on a project that uses computer simulations
for the analysis, design, and
  optimization of novel ultrasound phased-array applicators for
hyperthermia cancer therapy.
  The parent project, headed by Charles A. Cain, director of the
University of Michigan's
  Bioengineering Program, investigates the use of phased arrays for
hyperthermia from the basic
  hypothesis that phased arrays are potentially capable of producing
precisely defined heating
  patterns tailored to the tumor geometry even in the presence of
tissue inhomogeneities. In fact,
  only phased-array applicators have the potential for dynamic and
adaptive focusing deep into
  the body through tissue inhomogeneity. Furthermore, phased arrays
offer the promise of
  versatile and flexible applicator systems that focus and steer the
ultrasonic energy electronically
  without the need to mechanically move the applicator heads. By
greatly simplifying the
  machine-patient interface, such systems could enhance the clinical use of
  hyperthermia. Phased arrays
  can also directly synthesize multiple-focus heating patterns, thus
tailoring hyperthermia to the
  tumor geometry.
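
  The essential idea behind electronic focusing can be sketched in a few
  lines: if each element is driven with the conjugate of the propagation
  phase to a desired focal point, the contributions of all elements arrive in
  phase there and reinforce one another. The Python sketch below applies that
  conjugate-phase rule to a small hypothetical linear array and two requested
  foci. It illustrates the principle only--it is not the Michigan group's
  synthesis or optimization code--and the array geometry, frequency, and
  sound speed are assumed values.

    # Toy conjugate-phase focusing for an ultrasound phased array (sketch only).
    import cmath
    import math

    SPEED = 1500.0                       # m/s, nominal sound speed (assumed)
    FREQ = 1.0e6                         # Hz, operating frequency (assumed)
    K = 2.0 * math.pi * FREQ / SPEED     # wavenumber

    # A small linear array along x (positions in metres) and two desired foci.
    elements = [(x * 1.0e-3, 0.0, 0.0) for x in range(-16, 17)]
    foci = [(0.005, 0.0, 0.08), (-0.005, 0.0, 0.08)]

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    # Drive each element so its contribution arrives in phase at every focus.
    drives = []
    for e in elements:
        d = sum(cmath.exp(1j * K * dist(e, f)) for f in foci)
        drives.append(d / abs(d))        # keep unit amplitude, phase only

    def field(p):
        # Complex pressure (arbitrary units) from simple point-source elements.
        return sum(a * cmath.exp(-1j * K * dist(e, p)) / dist(e, p)
                   for e, a in zip(elements, drives))

    # The synthesized field is strongest near the two requested foci.
    for p in foci + [(0.02, 0.0, 0.08)]:
        print(p, round(abs(field(p)), 1))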

       This research is currently moving into optimization of site-
specific applicator systems to
  treat specific tumor types. Supercomputer simulations, using the
NCSA CRAY Y-MP system,
  will be used with 3D datasets of patients' anatomy from
computerized tomography (CT) or
  magnetic resonance imaging (MRI) scans. Geometric, acoustic, and
thermal optimization
  computations will also be performed. In a clinical setting, most
of these computations will be
  performed on workstations. However, in this development stage, the
supercomputer will
  continue to be an invaluable asset to this research program.

       While this work is primarily concerned with the development of
phased-array applicators
  for hyperthermia, its results will also be applicable to coherent
imaging systems, image
  synthesis, etc. One exciting application is the activation of
certain anticancer agents by
  controlling the ultrasound within precise spatial and temporal
boundaries.

  (left) Novel site-specific phased-array applicator designs with
conformal geometries are optimized, both geometrically and
  acoustically, based on 3D datasets of patients' anatomy from CT
or MRI scans. The spherical geometry of the array shown
  heats a prostatic tumor. Usable array elements are shown in blue.
Red indicates elements obstructed by bone, and white
  indicates elements obstructed by gas spaces (both bone and gas are
strong scatterers of ultrasound). The images were
  created using SunVision on a Sun SPARCstation 2.

  (above) Phased arrays can be used to directly synthesize complex
multiple-focus beam patterns overlaying tumor geometry.
  This image shows an intensity profile produced by simultaneously
focusing the spherical array on 30 points of the image at
  left. The image was created using IDEAL on a DEC 3100 workstation.


  "Young people don't have scientific role models; they don't see
scientists as
  heroes. Yet their talents may lie in scientific pursuits, and they
may find
  tremendous personal and career satisfaction in science. Programs
like
  SuperQuest, which allow students to witness science firsthand, are
what we
  need to foster a future generation of scientists and engineers in
our country."
  --David Ruzic, mentor for SuperQuest students and associate
professor in the Department of
  Nuclear Engineering, University of Illinois at Urbana-Champaign


  Educating tomorrow's scientists

  Using high-performance computing resources to solve grand
challenge problems in science and
  engineering has a parallel in addressing a challenge perhaps even
more far-reaching in its
  societal impact. That challenge--to encourage high school students
to pursue interests, and
  eventually careers, in the sciences--is addressed, in part, by
SuperQuest, a national
  computational science competition for high school students.

  Fourteen students from four high schools and their teacher-coaches
attended NCSA's first
  SuperQuest Institute in July 1991. The SuperQuest competition is
based on student research
  proposals from participating high schools. The proposals are
evaluated by a national panel of
  researchers on the basis of scientific merit, clarity, suitability
for supercomputer solution, and
  the student's previous work. Winning teams spend three weeks of
intensive training at a
  SuperQuest Center. NCSA is currently one of three places that
provide such an opportunity: the
  other two are the Cornell Theory Center and the University of
Alabama at Huntsville.

  During the three weeks at the SuperQuest Institute, students
worked on their own projects
  and learned how to write code for the supercomputers. Science
mentors provided additional
  guidance in their studies. The students also attended talks on
supercomputing topics including
  numerical methods, parallelization, symbolic manipulation,
networking, and visualization.
  Teacher-coaches had as much to learn as the students. They
attended the same lectures and
  were given private sessions on networking and mathematical
software.

  On returning to their high schools, students and their teacher-
coaches continue their research
  by remotely accessing NCSA's supercomputers, as well as other
supercomputer centers, using
  donated workstations and network connections. Other students and
teachers will benefit as
  well, because the workstations will continue to be used in local
computational science
  programs. When their yearlong projects are completed, the students
compete among
  themselves in the Best Student Paper Competition. The students
also give talks on their projects
  at professional scientific meetings. Details on several of the
SuperQuest projects follow.

  When two balls collide . . .
  When Patrick J. Crosby plays pool, he probably has more on his
mind than sinking the eight
  ball. Crosby, a student from Evanston Township High School in
Evanston, Illinois, is using
  NCSA's CRAY-2 supercomputer to construct an analytical model of
the collisions of spherical
  balls--a problem that has puzzled physicists for years. Through
physical experiments and
  computer simulations, Crosby is analyzing the contact time of the
two spheres under varying
  impact conditions.

  Crosby's computer model is based on many mass points, connected by
molecular springs, in
  a spherical lattice structure. Using the basic laws of physics and
a mathematical approximation,
  he determines the position, velocity, acceleration, kinetic
energy, and potential energy of each
  mass point, and of the entire structure, as a function of time.
The model is versatile enough to
  be used to study other aspects of the collisions of spheres,
including the deformations that occur
  upon contact and the energy distribution during deformations. Only
crude approximations
  could be made with the use of a personal computer, according to
Crosby.
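
  The structure of such a model is easy to convey in one dimension. The
  Python sketch below is a hypothetical toy, far simpler than Crosby's 3D
  spherical lattice: two short chains of mass points joined by linear springs
  approach each other, interact through a repulsive contact spring while
  their facing ends come closer than the spring rest length, and the
  accumulated contact time is reported. The masses, spring constants, and
  time step are arbitrary assumed values.

    # 1D toy of a mass-and-spring collision; the "contact time" is the total
    # time during which the contact spring between the facing ends is active.
    M = 1.0        # mass of each point (arbitrary units)
    KS = 200.0     # spring constant inside each chain
    KC = 500.0     # stiffness of the contact interaction
    DT = 1.0e-4    # time step for the symplectic (semi-implicit) Euler scheme
    REST = 1.0     # rest length of the internal springs and contact distance

    # Chain A moves right at +1, chain B moves left at -1, with a gap between.
    xs = [0.0, 1.0, 2.0, 4.0, 5.0, 6.0]
    vs = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]
    bonds = [(0, 1), (1, 2), (3, 4), (4, 5)]

    def forces(x):
        f = [0.0] * len(x)
        for i, j in bonds:                     # internal linear springs
            stretch = (x[j] - x[i]) - REST
            f[i] += KS * stretch
            f[j] -= KS * stretch
        overlap = REST - (x[3] - x[2])         # contact between facing ends
        if overlap > 0.0:                      # repulsive only while compressed
            f[2] -= KC * overlap
            f[3] += KC * overlap
        return f, overlap > 0.0

    contact_time = 0.0
    for _ in range(100_000):                   # 10 time units in all
        f, in_contact = forces(xs)
        vs = [v + DT * fi / M for v, fi in zip(vs, f)]
        xs = [x + DT * v for x, v in zip(xs, vs)]
        if in_contact:
            contact_time += DT

    print(f"approximate total contact time: {contact_time:.4f} time units")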

  Crosby's research has led him to the finals in the highly
competitive Westinghouse Science
  Talent Search. He is one of only 40 students (chosen from 300
semifinalists nationwide) who
  hold the honor. The winner of the competition is awarded a
$40,000 scholarship, as well as a
  good measure of prestige.

  From wave tracing to thermal images
  Crosby's SuperQuest teammate Doran Fink, also a student from
Evanston Township High
  School, has already reached one of those milestones that make
science exciting: he has solved
  his original SuperQuest problem, and moved on to another. For his
first SuperQuest topic, Fink
  used the CRAY-2 to determine the shape and position of a wave
front and the velocities of its
  individual points as a function of the density of the medium. This
method could be used to
  determine the path of a seismic wave front as it propagates
through the earth, a sonar wave
  front as it propagates through an ocean, or a light wave front as
it propagates through a strand
  of optical fiber.
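
  A minimal version of such a wave-front calculation is a ray trace through a
  layered medium using Snell's law: the quantity sin(theta)/c is conserved
  along the ray, so the local propagation speed fixes the local direction.
  The Python sketch below traces a single ray through an assumed speed
  profile that stands in for the density dependence; it illustrates the
  principle only and is not Fink's program.

    # Toy 2D ray trace through horizontal layers; sin(theta)/c is conserved.
    import math

    def speed(z):
        # Assumed profile: wave speed grows linearly with depth z
        # (arbitrary units; think of km and km/s).
        return 2.0 + 0.5 * z

    DZ = 0.01                           # layer thickness
    theta0 = math.radians(30.0)         # take-off angle from the vertical
    p = math.sin(theta0) / speed(0.0)   # Snell's ray parameter, fixed on the ray

    x, z, dz, steps = 0.0, 0.0, DZ, 0
    while z >= 0.0 and steps < 5000:
        s = p * speed(z)
        if s >= 1.0:                    # turning depth reached: head back up
            dz = -DZ
            z += dz
            steps += 1
            continue
        theta = math.asin(s)
        x += abs(dz) * math.tan(theta)  # horizontal advance across this layer
        z += dz
        steps += 1

    print(f"ray returned to the surface about {x:.2f} distance units downrange")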

  Thanks to the network connection to NCSA, Fink has begun a new
research project, again
  using NCSA's CRAY-2. He is interested in thermal images--those
images produced when a
  heat source of a certain shape is applied to a solid material.
(Doctors use various thermal
  imaging methods to scan for tumors and analyze organ performance.)
Fink is conducting
  physical experiments and computer simulations to determine the
equilibrium depth of a
  thermal image in an insulated, homogeneous solid. He has found
that the depth of the image is
  independent of the temperature of the impressed image, and that it
can be augmented if higher
  conductivity materials are placed within the solid.
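
  The behavior Fink describes can be reproduced qualitatively with a small
  relaxation calculation: impress a hot-and-cold pattern on the top face of a
  2D solid, insulate the other faces, relax to a steady state, and ask how
  deep the lateral contrast survives. The Python sketch below does this with
  Jacobi iteration on a coarse grid; it is a hypothetical illustration, not
  Fink's code. Because the underlying heat equation is linear, the depth at
  which the contrast dies out depends on the geometry of the pattern rather
  than on how hot the impressed image is, consistent with the independence
  noted above.

    # Steady-state "thermal image" in an insulated 2D solid (Jacobi relaxation).
    NX, NZ = 24, 24      # NX lateral points, NZ points into the solid
    SWEEPS = 1500

    # Impressed image on the surface row: alternating hot and cold bands.
    top = [1.0 if (i // 4) % 2 == 0 else 0.0 for i in range(NX)]

    T = [[0.5] * NX for _ in range(NZ)]
    T[0] = top[:]

    for _ in range(SWEEPS):
        new = [row[:] for row in T]
        for k in range(1, NZ):
            for i in range(NX):
                left = T[k][i - 1] if i > 0 else T[k][i + 1]         # insulated side
                right = T[k][i + 1] if i < NX - 1 else T[k][i - 1]   # insulated side
                below = T[k + 1][i] if k < NZ - 1 else T[k - 1][i]   # insulated bottom
                new[k][i] = 0.25 * (left + right + T[k - 1][i] + below)
        T = new

    # Depth at which the lateral contrast falls below 10% of its surface value.
    surface_contrast = max(T[0]) - min(T[0])
    for k in range(NZ):
        if max(T[k]) - min(T[k]) < 0.1 * surface_contrast:
            print(f"pattern effectively vanishes {k} grid layers below the surface")
            break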

  Fink suggests that this phenomenon could be used as a
nondestructive test for weld joints:
  any distortions in the thermal image of the joint would indicate
low conductivity regions
  (perhaps air pockets) within the weld joint. Fink advanced to the
semifinals in the
  Westinghouse Science Talent Search for his efforts in this
project.

  Modeling an x-ray telescope
  Choosing a topic related to astronomy was perhaps natural for
Tracy Speigner, a junior from
  the J. Oliver Johnson High School in Huntsville, Alabama.
Huntsville is the site of NASA's
  Marshall Space Flight Center. Speigner is using NCSA's Connection
Machine Model 2 (CM-2) to
  model a novel x-ray imaging telescope. X-rays, emitted from
otherwise invisible astronomical
  objects, are detected through devices carried on balloons or
satellites. Computer imaging
  techniques translate those signals into meaningful pictures. She
is running Monte Carlo
  simulations to determine the behavior of photons in the telescope.
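
  The spirit of such a Monte Carlo calculation can be shown with a
  deliberately generic sketch: photons from two point sources are launched
  with random directions, those that miss a small circular aperture are
  discarded, and the survivors are binned on a detector plane, building up an
  image photon by photon. The geometry in the Python sketch below is entirely
  hypothetical; it is not the telescope Speigner is modeling.

    # Generic Monte Carlo photon trace through a pinhole onto a detector plane.
    import random

    APERTURE_RADIUS = 0.01   # m, pinhole radius (assumed)
    APERTURE_Z = 1.0         # m, source-to-aperture distance (assumed)
    DETECTOR_Z = 1.5         # m, source-to-detector distance (assumed)
    PIXELS = 21
    PIXEL_SIZE = 0.01        # m
    N_PHOTONS = 200_000

    sources = [(-0.05, 0.0), (0.05, 0.02)]    # two x-ray point sources (x, y)
    image = [[0] * PIXELS for _ in range(PIXELS)]

    random.seed(4)
    for _ in range(N_PHOTONS):
        sx, sy = random.choice(sources)
        tx = random.uniform(-0.1, 0.1)        # direction slopes dx/dz, dy/dz
        ty = random.uniform(-0.1, 0.1)
        ax, ay = sx + tx * APERTURE_Z, sy + ty * APERTURE_Z
        if ax * ax + ay * ay > APERTURE_RADIUS ** 2:
            continue                          # photon stopped by the aperture plate
        px, py = sx + tx * DETECTOR_Z, sy + ty * DETECTOR_Z
        i = int(px / PIXEL_SIZE) + PIXELS // 2
        j = int(py / PIXEL_SIZE) + PIXELS // 2
        if 0 <= i < PIXELS and 0 <= j < PIXELS:
            image[j][i] += 1

    # Each source produces a small blob of counts, inverted through the pinhole.
    peak = max(max(row) for row in image)
    print(f"brightest detector pixel recorded {peak} photons")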

  In addition to exploring the world of computational science and
supercomputing, Speigner
  has had some remarkable opportunities to meet new people and
expand her horizons. Along
  with her teachers and another SuperQuest teammate, she traveled to
Washington, D.C. to
  participate in a demonstration of her project to the National
Science Foundation. Given the
  "returning hero" status accorded SuperQuest students by many of
their peers, Speigner and the other
  girls participating in SuperQuest are perhaps especially important
as role models to other girls
  interested in computational science.

  Predicting traffic jams
  From Los Angeles to New York City, countless motorists stuck in
traffic jams have wondered
  whether anything could be done about the commuter's curse. Patrick
Chan, a student at the
  James Logan High School in Union City, California, may not be able
to prevent traffic jams, but
  his SuperQuest project is an attempt to understand the
predictability of the snarl.

  Using elements of network theory, Chan first constructed, on the CRAY-2
  supercomputer, a mathematical model of traffic flow on a freeway. He used
  his model
to simulate traffic flow
  given the conditions of a traffic accident, road construction, or
simply too many cars. By
  analyzing the data from his simulation, Chan hopes to determine
the relationship between road
  conditions and traffic jams. He hopes that the results of his
study can be used to help
  commuters plan their routes around traffic jams, or to assist the
highway department in
  determining ideal times for construction projects.
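
  One simple way to see how jams emerge from local driving rules is a
  cellular-automaton traffic model of the Nagel-Schreckenberg type, offered
  here purely as an illustrative sketch; it is not Chan's network-theory
  model. Cars on a ring of cells accelerate toward a speed limit, slow down
  to avoid the car ahead, and occasionally slow at random, which is enough to
  produce stop-and-go waves at moderate densities. All parameters below are
  arbitrary assumed values.

    # Illustrative single-lane traffic cellular automaton on a ring of cells.
    import random

    ROAD = 200       # number of road cells
    CARS = 50        # number of cars
    VMAX = 5         # speed limit, in cells per time step
    P_SLOW = 0.3     # probability of a random slowdown
    STEPS = 100

    random.seed(2)
    positions = sorted(random.sample(range(ROAD), CARS))
    speeds = [0] * CARS

    for _ in range(STEPS):
        new_positions = []
        for idx, x in enumerate(positions):
            ahead = positions[(idx + 1) % CARS]    # cars keep their cyclic order
            gap = (ahead - x - 1) % ROAD           # empty cells to the next car
            v = min(speeds[idx] + 1, VMAX, gap)    # accelerate, but never collide
            if v > 0 and random.random() < P_SLOW:
                v -= 1                             # random slowdowns seed the jams
            speeds[idx] = v
            new_positions.append((x + v) % ROAD)
        positions = new_positions

    stopped = sum(1 for v in speeds if v == 0)
    print(f"{stopped} of {CARS} cars are standing still after {STEPS} steps")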

  Modeling fractal growth
  Daniel Stevenson, a student at Hudson High School in Hudson, Ohio,
is using the CM-2 and
  CRAY-2 supercomputers to study the growth processes of natural
fractal structures--frost
  patterns, electrochemical deposits, and biological structures, to
name a few.

  Stevenson took a simple fractal approach to modeling natural
growth processes--called
  diffusion limited aggregation (DLA)--and added some conditions to
make the model more
  closely approximate reality. The simple DLA approach assumes
infinitely dilute concentrations
of particles that move around randomly until they attach to a central
  cluster. Stevenson's
  conditions included the effects of cluster geometry on the rate of
surface growth, and the
  diffusion of particles away from the surface. Under these extended
DLA conditions, Stevenson
  hopes to determine characteristic geometries of computer-simulated
fractal structures, and to
  compare them with naturally occurring fractal structures.
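
  The simple DLA procedure described above is compact enough to sketch. In
  the toy Python version below, random walkers are released near the growing
  cluster, wander on a square lattice, and stick when they touch the cluster;
  walkers that stray too far are simply relaunched. This bare-bones sketch
  omits the additional conditions on surface growth rate and particle escape
  that Stevenson added, and its parameters are arbitrary.

    # Minimal diffusion-limited aggregation (DLA) on a square lattice.
    import math
    import random

    PARTICLES = 300
    random.seed(3)
    cluster = {(0, 0)}      # central seed at the origin
    radius = 0              # current cluster radius (in lattice units)

    def touches_cluster(x, y):
        return any((x + dx, y + dy) in cluster
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(PARTICLES):
        # Release each walker on a circle just outside the cluster.
        launch = radius + 5
        angle = random.uniform(0.0, 2.0 * math.pi)
        x, y = round(launch * math.cos(angle)), round(launch * math.sin(angle))

        while True:
            if touches_cluster(x, y):
                cluster.add((x, y))                      # stick on contact
                radius = max(radius, round(math.hypot(x, y)))
                break
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if math.hypot(x, y) > launch + 20:           # strayed too far: relaunch
                angle = random.uniform(0.0, 2.0 * math.pi)
                x = round(launch * math.cos(angle))
                y = round(launch * math.sin(angle))

    print(f"grew a {len(cluster)}-site cluster of radius about {radius} lattice units")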

  Stevenson was chosen to represent the SuperQuest program at the
Supercomputing '91
  Conference in Albuquerque, New Mexico. He gave a paper and, along
with his SuperQuest
  teammates, presented a poster.

  Expanding HPCC in education
  SuperQuest is part of a larger NCSA effort to expand the role of
high-performance computing
  in education at local, state, and national levels. Computational
science allows the students to
  explore real scientific problems, conduct real scientific
experiments, and to discover the joy of
  scientific research. "One of the problems in science education is
that the students are talked to
  about science," said Nora Sabelli, assistant director of education
at NCSA. "That doesn't teach
  science. . . . You learn science by doing science. In the next ten
years, computer modeling and
  mathematical experimentation will be an accepted mode of forming
scientific hypotheses, along
  with theory and physical experimentation. The sooner we introduce
young people to
  supercomputers, the better."

  (top) Hudson High School participants.

  (bottom) J. Oliver Johnson High School participants.

  (top) James Logan High School participants.

  (bottom) Evanston Township  High School participants.

  Evanston Township
  High School
  Evanston, IL
  David A. Dannels,
       teacher-coach
  Patrick J. Crosby
  Doran L. Fink
  Sarah Hayford
  Paul Lewis

  Hudson High School
  Hudson, OH
  Vaughn D. Leigh,
       teacher-coach
  Andrew E. Goodsell
  Daniel Stevenson
  Jeremy Stone

  James Logan High School
  Union City, CA
  Charmaine Banther,
       teacher-coach
  Patrick Chan
  Francis Eymard R. Mendoza
  Francis Michael R. Mendoza
  Benjamin T. Poh

  J. Oliver Johnson
  High School
  Huntsville, AL
  Sharon Carruth,
       teacher-coach
  Melissa Chandler
  LaShawna Morton
  Tracy Speigner


  Executive Editor     Paulette Sancken

  Associate Editor     Mary Hoffman

  Managing Editor      Melissa LaBorg Johnson

  Designer     Linda Jackson

  Contributing Editors Fran Bond
       Stephanie Drake

  Copy Editor  Virginia Hudak-David

  Contributing Writers Fran Bond, Jarrett Cohen, Mary Hoffman, Sara
Latta,
       Paulette Sancken, the researchers and their assistants

  Photographers        Thompson-McClellan Photography

  Printer      University of Illinois Office of Printing Services
       Printing Division, March 1992



  Disclaimer: Neither the National Center for Supercomputing
Applications nor the United States
  Government nor the National Science Foundation nor any of their
employees makes any warranty or
  assumes any legal liability or responsibility for the accuracy,
completeness, or usefulness of any
  information, apparatus, product, or process disclosed, or
represents that its use would not infringe privately
  owned rights. Reference to any specific commercial product,
process, or service by trade name, trademark,
  manufacturer, or otherwise, does not necessarily constitute or
imply endorsement, recommendation, or
  favoring by the National Center for Supercomputing Applications or
the United States Government or the
  National Science Foundation, and shall not be used for advertising
or product endorsement.

  All brand and product names are trademarks or registered
trademarks of their respective holders.


  National Center for Supercomputing Applications
  605 East Springfield Avenue
  Champaign, IL 61820-5518
  (217) 244-0072