RISKS-LIST: RISKS-FORUM Digest  Tuesday 15 November 1988   Volume 7 : Issue 78

       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
 Computers in Elections (PGN)
 Risks in econometric models (Ross Miller)
 Report on SAFECOMP '88 [long] (Tim Shimeall)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
CONTRIBUTIONS to [email protected], with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to [email protected].
FOR VOL i ISSUE j / ftp kl.sri.com / login anonymous (ANY NONNULL PASSWORD) /
 get stripe:<risks>risks-i.j ... (OR TRY cd stripe:<risks> / get risks-i.j ...
 Volume summaries in (i, max j) = (1,46),(2,57),(3,92),(4,97),(5,85),(6,95).

----------------------------------------------------------------------

Date: Tue, 15 Nov 1988 11:10:34 PST
From: Peter Neumann <[email protected]>
Subject: Computers in Elections

Readers of the references given in RISKS-7.52 to 54, and 7.70 to 71 (the New
Yorker article by Ronnie Dugger, and reports by Roy Saltman; Lance Hoffman;
Bob Wilcox and Erik Nilsson; and Howard Strauss and Jon Edwards) know that
at least five past elections have been legally challenged on grounds of
fraud.  In all of these cases, the same company (BRC, formerly CES) provided
the computing services.  The lawsuit in Indiana is still in progress.

The latest item on the integrity of computers in elections relates to this
year's Senate race in Florida.  The New York Times (Saturday, 12 Nov 88, page
9) had an article by Andrew Rosenthal on suspicions of fraud arising from the
results.  At the end of the Election Day ballot counting, the Democrat Buddy
Mackay was ahead.  After the absentee ballots were counted, the Republican
Connie Mack was declared the winner by 30,000 votes out of 4 million.  However,
in four counties for which BRC provided the computing services, the number of
votes counted for Senator was 200,000 less than the number counted for President
(i.e., 20% less), while in other counties and in previous elections the two
vote totals have generally been within 1% of each other.  Remembering that
these computer systems reportedly permit operators to turn off the audit trails
and to change arbitrary memory locations on the fly, it seems natural to wonder
whether anything fishy went on.  I hope that our Florida readers will keep us
informed of any further developments.
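
For those who want to check the arithmetic, here is a minimal sketch; the
presidential vote total for the four counties is inferred from the
200,000-vote and 20% figures in the article, not reported directly:

    # Back-of-the-envelope check of the reported drop-off.  The 200,000
    # and 20% figures come from the Times article; the presidential
    # total is implied by them rather than reported.
    senate_deficit = 200_000                    # fewer votes for Senator
    dropoff_rate = 0.20                         # reported 20% drop-off
    president_votes = senate_deficit / dropoff_rate   # 1,000,000 implied
    typical_dropoff = 0.01 * president_votes          # ~10,000 votes at
    print(president_votes, typical_dropoff)           # the historical 1%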

------------------------------

Date: Mon, 14 Nov 88 16:36:19 EST
From: [email protected] (Ross Miller)
Subject: Risks in econometric models

On the front page of the Sunday N.Y. Times, Peter Neumann raises a
computer-security-related risk that I have not seen discussed before.  At the
end of his piece describing the potential for viruses to spread, he states, "Do we know
the econometric models of the country are correct, for example?"  As a once and
sometimes econometrician (who does microeconomic rather than macroeconomic
work), I found this question to be one that is worth examining.

The recent defeat of Michael Dukakis was probably caused in part by the
well-publicized fiscal problems of Massachusetts.  Did George Bush introduce a
computer virus into the state's computers?  Probably not.  What happened was
that the effective federal tax rate on capital gains went up, causing investors to
rush to take capital gains before the higher rates went into effect,
temporarily inflating capital gains tax revenues at both federal and state
levels.  Because Massachusetts, like many other states, treated the increased
tax collections as a normal part of revenue growth, it continued to project
these gains into the future, when the higher tax rates would prevail.
These projections were wrong, and we know what happened.
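
A toy version of that conceptual bug, with invented numbers, makes the
mechanism plain:

    # Toy version of the projection error: a one-time bump in capital
    # gains revenue is mistaken for trend growth.  All figures invented.
    base, bump = 100.0, 20.0          # normal revenue plus one-time bump
    observed_growth = (base + bump) / base - 1.0       # looks like 20%
    projected = (base + bump) * (1.0 + observed_growth)    # 144 expected
    actual = base                     # but the bump does not recur
    print(projected - actual)         # a 44-unit shortfall to explain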

This risk, however, has nothing to do with viruses; it has to do with the
environment in which econometric models are created and used.  It should be
noted that the econometric modeling industry has been shrinking for several
years--many banks have eliminated their in-house forecasting group and
subscriptions to outside forecasters are down.  The federal government has
slashed the funds available for data collection and forecasting activities.

The real risk from traditional computer-based econometric forecasting comes
from the lack of new money and talent flowing into the field, which keeps the
industry from advancing technologically.  (If you don't believe me, call a
venture capitalist and tell him you'd like to start an econometric forecasting
firm.)  Conceptual bugs, such as the one described above, and programming
errors are likely to swamp viruses as a source of error.

Should we worry about all this?  No.  First, to the extent that such models are
used, humans are an important part of the loop.  "Fudge factors" are built into
every model and unreasonable projections are not used--the model is rerun with
new fudge factors.  No doubt, just as programming and conceptual bugs have
been "fudged over," a virus would be, too.  Second, for most business
applications there are much better sources of economic forecasts than large
econometric models, and they are essentially free.  For example, I can safely
state that a reasonable projection of crude oil prices is that they will remain
stable over the next year, decreasing a bit over the winter and going back up
in summer.  Not only that, you can expect long-term interest rates to rise by
about 0.5% over the next two years.  Did I use a Ouija board or consult my
local oracle?  No, I just looked up the futures prices in the Wall Street
Journal.  These are market-generated predictions that are based on the
aggregate information contained by the marketplace.  True, they do not have
pinpoint accuracy, but they tend to perform quite well on average.  As many
companies and banks have concluded, who needs big, expensive econometric
models?  Maybe, just maybe, the marketplace is capable of taking care of some
risks by itself; in this case, the risk has nothing to do with viruses.
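
For the curious, reading such a forecast off a futures curve takes only a few
lines; the prices below are made-up placeholders, not actual quotes from the
Journal:

    # Market-generated forecast: the futures price for each delivery
    # month is the market's aggregate prediction of the spot price then.
    # These prices are hypothetical placeholders, NOT actual 1988 quotes.
    crude_futures = {             # delivery month -> dollars per barrel
        "Dec-88": 14.10,
        "Feb-89": 13.80,          # a bit lower over the winter
        "Aug-89": 14.40,          # back up in the summer
    }
    for month, price in crude_futures.items():
        print(f"forecast spot price, {month}: ${price:.2f}/bbl")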

Ross Miller          Phone: (617) 868-1135
Boston University    Internet: [email protected]

   [The Times piece consisted of sentences randomly culled from a discursive
   discussion.  The econometric sentence was totally out of context --
   relating to integrity and correctness problems, not just worms/viruses,
   but I'm glad you picked up on it!  Thanks.  PGN]

------------------------------

Date: Mon, 14 Nov 88 09:50:52 PST
From: [email protected] (Tim Shimeall x2509)
Subject: Report on SAFECOMP '88 [long]

                Report on the IFAC Symposium on
              Safety of Computer Control Systems
                        (SAFECOMP '88)
        Safety Related Computers in an Expanding Market
        November 9-11, 1988 -- Fulda, FRG (West Germany)

This message is a set of personal observations on the Symposium on Safety of
Computer Control Systems, originated and run by the members of EWICS TC-7 with
support from IFAC and IFIPS. Prior to this meeting, SAFECOMP was held every
three years. This meeting was held two years after its predecessor (SAFECOMP
'86 in Sarlat, France) and henceforth is planned to be an annual event
(SAFECOMP '89 will be held in Vienna, Austria on September 5-7, 1989).

EWICS TC-7 (Abstracted from a talk by J. M. A. Rata):

The European Workshop on Industrial Computer Systems is a group that began as
"Purdue Europe", a series of workshops held in Europe by Purdue University; it
is now sponsored by the European Economic Community.  Almost all
European nations have representatives in EWICS, with the exceptions being
Spain, Portugal and Greece.  The majority of members come from France, the
United Kingdom and West Germany. TC-7 is the Technical Committee on
Reliability, Safety and Security. It's an active group, with a series of
technical reports and "Pre-standard" guidelines on computer safety and
reliability published at frequent intervals. The current Chair of TC-7 is J. M.
A. Rata.

The Workshop:

Over the two and a half days of the symposium, a total of 26 presentations were
made. I'm not going to summarize all of the talks, but will give a description
of those I found most interesting.  The Symposium proceedings are available
from Pergamon Press (Edited by W. D. Ehrenberger, ISBN 0-08-036389), but there
were 6 talks given at the symposium that were not part of the proceedings - 4
of the papers were distributed on site, 1 was a report of work in progress, and
the last was Dr. Rata's description of TC-7. NOTE: the following are summaries
of my notes on the presentations I personally found most interesting.  I
profoundly regret any inaccuracies, and no criticism of the papers omitted
from this report should be implied.  [My personal comments are in square brackets
- TJS]

Dahll, G., Mainka, U. and J. Maertz*, "Tools for the Standardised
Software Safety Assessment (The SOSAT Project)"
 This was a description of an environment to aid the licensor of
 safety-related code. It starts with the final object code of the application
 to be assessed. This is disassembled and instrumented for comparison with the
 specification. There are also capabilities for analyzing the disassembled
 code with commercial tools (e.g., SPADE -- See O'Neill's talk below).  SOSAT
 itself supports 4 types of analysis: Static Analysis, including structure,
 path and data flow analysis; White-box test data generation; Symbolic
 Execution; and Real-time timing analysis.  The latter was the subject of a
 presentation by G. Rabe at SAFECOMP '88. It's basically a sophisticated
 profiler that interfaces directly to the target hardware.  [SOSAT clearly has
 much development ahead, but there seems to be a good start in considering the
 sort of tools that the licensing examiner may find useful in evaluating
 safety-related software.]
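
 [SOSAT's tooling is not described in implementable detail; purely as an
 illustration of what a path analysis does, here is a toy enumerator of the
 acyclic paths through a control-flow graph.  The graph and all names are
 invented, not the output of any real tool.]

  # Toy path analysis in the spirit of SOSAT's structure/path analysis:
  # enumerate every acyclic path through a control-flow graph.
  cfg = {                          # node -> successor nodes (invented)
      "entry": ["check"],
      "check": ["safe", "trip"],
      "safe":  ["exit"],
      "trip":  ["exit"],
      "exit":  [],
  }

  def paths(node, seen=()):
      if not cfg[node]:            # no successors: a complete path
          yield seen + (node,)
      for nxt in cfg[node]:
          if nxt not in seen:      # keep the path acyclic
              yield from paths(nxt, seen + (node,))

  for p in paths("entry"):
      print(" -> ".join(p))        # the two paths through the branch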

Bergerand, J.L. and E. Pilaud, "SAGA - A Software Development
Environment for Dependable Automatic Controls"
 The French cousin of SOSAT is SAGA. This environment is focused more on the
 development of the code than the licensing. It is basically intended to
 improve the designs (by supporting design-level analyses) and to support
 reuse of software modules (with the theory that the more a module is re-used,
 the better it gets). SAGA has been used to support the development of nuclear
 power plant control code.  It doesn't seem to improve productivity, but it
 does seem to improve the quality of the resultant code.

Fedra, K., "Information and Decision Support Systems for Risk
Analysis"
 This was a report on a tool to support qualitative risk assessment and
 disaster planning. It provides a graphic interface to simulate disastrous
 events and link to databases on related risks, along with geology, geography,
 weather and population demographics of the region in question.  It's geared
 for non-expert users to support industrial safety decisions.  A trial system
 has been used in the People's Republic of China.  The system also has an
 expert system to support hazard management techniques and support for safety
 analysis tools like fault-tree analysis. [This was without doubt the
 prettiest presentation of the symposium, with impressive color graphics
 showing simulations of Chernobyl, ground-water contamination and population
 evacuation. However, there were a lot of questions at the symposium about the
 fidelity of the underlying models used. The basic defense by Dr. Fedra was
 that the tool does better than the current techniques, safely supporting
 smaller safety margins.  I'm not entirely sure I believe that, given the
 oft-cited propensity of non-expert users to trust computers too much.]

Taylor, J.R. "Reducing the Risks from Systems Documentation
Errors"
 The motivation for this work was a case where inaccurate documentation for
 a circuit board that was part of the firing control system resulted in a
 faulty installation that caused a gun turret on a Danish naval vessel to
 over-revolve and fire at its captain. A subsequent study found that errors in
 documentation of safety-related subsystems are quite frequent.  To reduce
 these errors, Taylor created the ANNADOC system.  The documents are
 translated by the users into a simplified technical English, which is in turn
 translated by the system into a set of finite state machines. The FSMs are
 used to simulate the system so that the documented behavior may be compared
 with the specified or actual behavior.
 [This talk was interesting because of a rarely-considered aspect of safety,
 namely the effect of the documentation.  A member of the audience cited an
 additional example, a case where incorrect wiring diagrams (known to be
 incorrect by the management involved and stamped "DO NOT USE", but not
 corrected) were used in the maintenance of a nuclear reactor. The erroneous
 wiring caused a reactor trip.]
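
 [ANNADOC is likewise not described in implementable detail; as a sketch of
 the core idea, simulating documented behavior as finite state machines, here
 is a toy simulator.  All states, events and transitions are invented.]

  # Toy FSM simulation in the spirit of ANNADOC: run the machine derived
  # from the documentation over an event trace, then compare the result
  # with the end state the specification expects.
  documented_fsm = {               # (state, event) -> next state
      ("idle",     "arm"):    "armed",
      ("armed",    "rotate"): "rotating",
      ("rotating", "stop"):   "armed",
      ("armed",    "disarm"): "idle",
  }

  def simulate(fsm, start, events):
      state = start
      for ev in events:
          nxt = fsm.get((state, ev))
          if nxt is None:          # the documentation is silent here
              return "UNDOCUMENTED: %r in state %r" % (ev, state)
          state = nxt
      return state

  print(simulate(documented_fsm, "idle",
                 ["arm", "rotate", "stop", "disarm"]))   # -> idle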

Panel Discussion: "Is probabilistic thinking reasonable in
software safety applications?"
 The Proponents of probabilistic thinking cited studies showing that humans are
 not deterministic in their behavior, and the desire to be able to use models
 similar to those used in hardware.  The Opponents countered by saying that we
 really don't have much basis for supporting probabilistic software
 reliability statements -- if a failure is found during software safety
 assessment, any reasonable licensing authority will require modification of
 the software to prevent that failure. The favored approach for the opponents
 seemed to be careful development and rigorous analysis of the software.  A
 poll of the audience after the discussion showed that a large majority didn't
 feel that probabilistic thinking was reasonable for software.
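
 [A standard statistical calculation, mine rather than one presented at the
 panel, illustrates the opponents' point: the number of failure-free test runs
 needed to support even a modest probabilistic claim is enormous.]

  # How many independent, failure-free test runs are needed to claim,
  # with confidence C, that the per-run failure probability is below p?
  # Solve (1 - p)**n <= 1 - C for n.
  import math

  def runs_needed(p, confidence):
      return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

  print(runs_needed(1e-4, 0.99))   # about 46,050 runs for a modest claim
  print(runs_needed(1e-9, 0.99))   # billions of runs for an ultra-high one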

Bloomfield, R.E. and P.K.D. Froome, "The Assessment and Licensing
of Safety Related Software"
 [This is a presentation of an extensive tech report "Licensing Issues
 Associated with the Use of Computers in the Nuclear Industry", R.E.
 Bloomfield and W.D. Ehrenberger, Tech Report EUR11147en, Commission of the
 European Communities, Nuclear Science and Technology, 1988.  ISBN
 92-825-8005-9.  Lots of interesting summaries of the use of computers in
 various nations for nuclear and safety-related applications.  It has a stated
 purchase price of $24.50 from the Office for Official Publications of the
 European Communities, L-2985 Luxembourg]

 Certification is a formal agreement on the fitness of a system for a
 specific purpose. There is some transfer of responsibility involved in the
 certification process, morally if not legally.  Certification is normally a
 large process, with much delegation and summarization.  There may be
 pressures on the certification team to avoid articulating concerns (political
 and social pressure), to automatically accept subsystems that were generated
 in response to the certification team's comments, and to ignore a series of
 small problems that collectively destroy the certifiers' confidence in the
 system.  With respect to software, there are several persistent questions:
  + What is the acceptance of risk among the populace and how do
    the certifiers acknowledge that?
  + How does risk analysis reflect value systems?
  + What are the technological limits?
  + What role should numbers play in certification? If the probabilities are
    dominated by common-mode or human-error effects, how should they be
    evaluated?
  + Should individuals and institutions need to be certified, as well as
    systems?
 Recent work (especially the UK Defence standard that will be
 published next year) focuses on formal analysis approaches:
 Z, VDM, HOL, CCS, CSP, and the use of temporal logics.
 [Much here that will be familiar to regular RISKS readers, but
 useful to see someone from the licensing side articulating
 these concerns.  One member in the audience raised the issue of
 how one can recognize, or measure, good software engineering practice.]

O'Neill, I.M., Summers, P.G., Clutterbuck, D.L., and P.F. Farrow
"The Formal Verification of Safety-Critical Assembly Code"

 A report of a project at Rolls-Royce to certify jet aircraft control code
 using SPADE.  The assembly code was mechanically translated into FDL for
 analysis, with annotation of proof obligations automatically inserted during
 the translation. A flow analysis of the FDL code raised queries that were
 resolved by the implementation team before the proof was conducted.  Pre- and
 post-conditions were derived from module "fact sheets" and manually inserted
 into the FDL code.  The generation of further annotations for the proof was
 done automatically, but the proof itself involved substantial human
 interaction.  Approximately 100 modules were proven.  Total correctness was
 not proven for all modules, but only about 12 involved loops at all, and the
 certification team assured themselves that the loops had a fixed limit on the
 iterations. Each module was verified individually, with no consideration of
 the inter-modular data flow.
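
 [The FDL annotations themselves are not reproduced in the paper; as a rough
 analogue of the pre/post-condition discipline, here is a sketch with Python
 assertions standing in for proof obligations.  The module and its "fact
 sheet" conditions are invented.]

  # Runtime assertions standing in for the FDL proof annotations the
  # talk describes; SPADE discharges such obligations by proof over all
  # inputs, while an assertion checks only the execution at hand.
  def scale_demand(demand, limit):
      # Pre-condition, as a hypothetical module "fact sheet" might state:
      assert 0 <= demand and 0 < limit
      result = min(demand, limit)
      # Post-condition: the output stays within the stated limit.
      assert 0 <= result <= limit
      return result

  print(scale_demand(120, 100))    # -> 100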

Concluding Session (W. Ehrenberger):
 A poll of the attendees of the symposium showed concern about the following
problems:
   + Specification of Systems and tools to support this
   + Limits of understanding of the role of software in Safety
   + Man/Machine interface problems
   + Risk-reducing tools (how do we qualify the results?)
   + Diverging Technology (General approaches seem largely flawed)
   + Reluctance of Industry to use new techniques
   + Robust metrics and measurement (and how it relates to
     political acceptance of risk)
   + Identification of Critical system components and critical failures
[Ehrenberger noted that many of these concerns were unchanged since the first
SAFECOMP in 1979 -- a recognition that we have a long way to go.  It seemed
to me that this was a fair summation of the entire symposium.  Some
approaches look promising, but we have a long way to go to really address the
problems.  The papers were strong in recognizing the issues, but there was a
large gap between the acknowledged problems and the proposed solutions.]

*Note: Umlauts are interpreted by using the "following e"
convention. Thus, a-umlaut is written as ae, etc.

------------------------------

End of RISKS-FORUM Digest 7.78
************************