% webguide.tex Guide to LaTeX, STEP and the Web, etc
\documentclass[11pt]{article}
% a11pream.tex generic preamble
\usepackage{url}
%\usepackage{ltx2html}
\setlength{\textheight}{8.0in}
\setlength{\textwidth}{6.0in}
\setlength{\oddsidemargin}{0.25in}
\setlength{\evensidemargin}{0.25in}
\setlength{\marginparwidth}{0.6in}
\setcounter{secnumdepth}{4}
\setcounter{tocdepth}{4}
%% \usepackage{times}
\newif\ifpdf
\ifx\pdfoutput\undefined
\pdffalse
\else
\pdftrue
\fi
\ifpdf
\pdfoutput=1
\usepackage[pdftex]{graphicx}
\else
\usepackage{graphicx}
\fi
\newcommand{\file}[1]{\textsf{#1}}
\newcommand{\program}[1]{\texttt{#1}}
\newcommand{\package}[1]{\texttt{#1}}
\newcommand{\mpost}{\textsc{MetaPost}}
\newcommand{\mfont}{\textsc{metafont}}
\newcommand{\Han}{H\`{a}n Th\^{e} Th\`{a}nh}
\newcommand{\tex}{TeX}
\newcommand{\latex}{LaTeX}
\title{A Brief Guide to \latex{} Tools for Web Publishing}
\author{Peter R. Wilson\thanks{With helpful critiques by
Eitan Gurari (\texttt{[email protected]}) and
David Wilson (\texttt{[email protected]}).} \\
\texttt{[email protected]}}
\date{11 March 2000}
\begin{document}
\pagestyle{headings}
\pagenumbering{roman}
\maketitle
\begin{abstract}
This document provides a brief guide to converting \latex{} documents
to forms more suitable for dissemination via the Web.
\end{abstract}
\tableofcontents
\listoffigures
\clearpage
\pagenumbering{arabic}
\section{Introduction}
Publishing on the Web has rapidly become important. For example, the
International Organization for Standardization (ISO)
is moving towards electronic forms of International Standard documents
that are suitable for publishing on the Web, in particular documents
supplied as PDF or HTML files rather than as the traditional camera-ready
paper copy.
Documents written using \latex~\cite{LAMPORT94} tagging can be
easily converted to
PostScript, PDF and HTML, all from the single electronic source. This
guide briefly notes some of the ways that this can be accomplished.
Most of the programs and systems mentioned here are described in more
detail in~\cite{GOOSSENS99}.
I have made no attempt to design this document for Web publication. The
typographical rules for printing on paper are well founded, having been
developed over hundreds of years. Display on computer screens is a
very different matter and requires a different set of rules, most of which,
as yet, are either in a state of flux or unavailable. For LaTeXers who
are interested in this topic I suggest a look at D.~P.~Story's work on
AcroTeX (\url{http://www.math.uakron.edu/~dpstory/acrotex.html}). Further,
for the example conversions I have used only the minimal tool options
necessary. Many of the tools have extensive capabilities which are well
documented in their accompanying user manuals; these should be consulted for
further information.
\subsection{URLs}
I have tried to provide URLs for the programs and systems mentioned
here. Most \latex-related software is available from the Comprehensive
TeX Archive Network (CTAN). There are three sites,
\url{ftp://ctan.tug.org/tex-archive} in the USA,
\url{ftp://ftp.tex.ac.uk/tex-archive} in the UK, and
\url{ftp://ftp.dante.de/tex-archive} in Germany, as well as several mirror
sites. Usefully, the CTAN sites (but not necessarily the mirror sites) support
on-the-fly zipping of files and entire directories, which makes downloading
a group of files less tedious than fetching them one by one. Below,
I have used \url{ftp://ctan.tug.org/tex-archive} to stand for any of
the three CTAN sites.
\subsection{Disclaimer}
Nothing that is said in this document is meant to imply any endorsement
or recommendation, either
positive or negative, concerning any systems or programs mentioned herein.
Many of the systems and programs are `free' in the sense that they are
either in the public domain or have licenses roughly equivalent to the GNU
General Public License.
Others are commercial, have more restrictive
licenses, or may require payment. Where known, programs and systems that are
not `free' are noted.
\section{PDF}
The traditional output from a \latex{} (e.g., \file{*.tex}) file is
a `device independent' \file{*.dvi} file. The \file{*.dvi} file is then processed further to convert
it to a format suitable for printing on a particular printing device.
In the vast majority of cases the final printable format
has been PostScript, obtained by running the \file{*.dvi} file through a
program like \program{dvips}, to generate a \file{*.ps} file.
PostScript was developed by Adobe Systems. The Portable Document
Format (PDF) has since also been developed by Adobe, and seems to be
overtaking PostScript as the format of choice for printing, and especially
for display via the Web.
DVI and PDF are somewhat similar in that they both describe where
(electronic) ink is to be put on (electronic) paper. PostScript also
does this but at the same time it is a complete programming language.
This means that processing PostScript is inherently more difficult, time
consuming, and computer intensive than processing either DVI or PDF. This is probably
the reason behind the popularity of PDF on the Web.
There are now several methods of producing a PDF (e.g., \file{*.pdf})
file from \file{*.tex}. These include:
\begin{itemize}
\item Converting from PostScript to PDF;
from \file{*.ps} to \file{*.pdf}.
\item Generating PDF from the device-independent file;
from \file{*.dvi} to \file{*.pdf}.
\item Generating PDF directly from the \latex{} source;
from \file{*.tex} to \file{*.pdf}.
\end{itemize}
\subsection{From PostScript to PDF}
There are basically two routes to getting from PostScript to PDF. The
first of these is to use Acrobat software from Adobe Systems, which
essentially means the commercial \program{Distiller} program.
\program{Distiller} can read in a PostScript file and output a PDF file
where the visual results of printing the two files are identical. This,
or any other, PDF file can be viewed and/or printed via the charge-free
Acrobat \program{Reader} program. Note that when using \program{Reader}
the `fit to paper' option may alter the page layout, for example by changing
the height of the text block.
The second route is to use a non-Adobe converter program,
like \program{Ghostscript} which runs on nearly all operating systems and
which is obtainable from
\url{http://www.cs.wisc.edu/~ghost}. The \program{Ghostscript} distribution
comes with a script called \program{ps2pdf} which performs the conversion.
The distribution also provides the popular \program{Ghostview} program,
which is a viewer for both PostScript and PDF files.
Another
converter program, which does have some licensing conditions that may not
be suitable for all users, is \program{PStill}; it is available from
\url{http://www.this.net/~frank/pstill.html}.
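As a concrete sketch of the second route, assuming a \program{Ghostscript}
installation that provides the \program{ps2pdf} script, the complete path from
\latex{} source to PDF via PostScript might look like:
\begin{verbatim}
latex example                  # produces example.dvi
dvips example -o example.ps    # DVI to PostScript
ps2pdf example.ps example.pdf  # PostScript to PDF
\end{verbatim}
The file names here are illustrative only; \program{ps2pdf} simply passes its
arguments to \program{Ghostscript} with suitable options.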
\subsection{From DVI to PDF}
Mark Wicks' \program{dvipdfm} program
(\url{http://odo.kettering.edu/dvipdfm}) converts a \file{*.dvi} file to a
\file{*.pdf} file. The program is used in the same manner as \program{dvips}
and provides similar capabilities.
PostScript illustrations are handled in one of two ways. Simple PostScript
generated by the \mpost{} program~\cite{HOBBY92} is included natively.
Any other PostScript file is first converted to PDF by using an external
program like \program{Ghostscript} and then inserted
into the output file. Illustrations in PDF, PNG and JPEG formats require
no external aids.
\program{dvipdfm} is written in C, although precompiled binaries are
available for some systems, such as Linux.
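As a minimal sketch, assuming the document's graphics are already in formats
that \program{dvipdfm} accepts:
\begin{verbatim}
latex example      # produces example.dvi
dvipdfm example    # produces example.pdf
\end{verbatim}
If the document uses the \package{graphicx} or \package{hyperref} packages,
giving them the \texttt{dvipdfm} driver option (for example
\verb|\usepackage[dvipdfm]{graphicx}|) makes them emit the specials that
\program{dvipdfm} understands.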
\subsection{From LaTeX to PDF}
The \program{pdfLaTeX} program being developed by \Han{} is based on
\program{pdfTeX}, a modified
version of \tex{} that generates \file{*.pdf} instead of \file{*.dvi} output
files. \program{pdfLaTeX} is distributed with many of the free \latex{}
distributions, and is also obtainable from
\url{ftp://ftp.cstug.cz/pub/tex/local/cstug/thanh}, although it may be
better to try \url{ftp://ctan.tug.org/tex-archive/systems/pdftex}.
Running \program{pdfLaTeX} is very similar to running \latex, but some
minor changes are required to the \file{*.tex} file. For example: \label{code:example}
\begin{verbatim}
% example.tex example latex file
\documentclass[...]{...}
\newif\ifpdf
\ifx\pdfoutput\undefined
\pdffalse
\else
\pdftrue
\fi
\ifpdf
\pdfoutput=1
% \usepackage[pdftex]{graphicx} % uncomment if using graphicx
% \usepackage[pdftex]{hyperref} % uncomment if using hyperref
\else
% \usepackage{graphicx} % uncomment if using graphicx
% \usepackage{hyperref} % uncomment if using hyperref
\fi
...
\end{verbatim}
Running \\
\texttt{latex example} \\
will produce \file{example.dvi}, while running \\
\texttt{pdflatex example} \\
will produce \file{example.pdf}. It is thus very easy to generate both
\file{*.dvi} and \file{*.pdf} from the same \latex{} source file.
\program{pdflatex} will handle graphics files in the following formats:
PDF, PNG, JPEG and TIFF, but notice that (Encapsulated) PostScript is
missing from this list. However, it can handle directly the simple
Encapsulated PostScript output by \mpost~\cite{HOBBY92}.
It does, though, expect \mpost{} files to have a \file{.mps} extension.
To include PostScript from other sources it is necessary to convert the
PostScript to PDF.
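One common route, assuming the \program{epstopdf} script (a small
\program{Ghostscript}-based utility shipped with many \tex{} distributions) is
available, is:
\begin{verbatim}
epstopdf figure.eps    # produces figure.pdf
\end{verbatim}
Writing \verb|\includegraphics{figure}| without an extension then lets
\latex{} pick up \file{figure.eps} and \program{pdflatex} pick up
\file{figure.pdf} from the same source.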
\program{pdftex}, and hence \program{pdflatex}, has some extra
primitive commands, not available in \tex{} itself, specifically for accessing
aspects of the PDF format, for example to create hypertext links, bookmarks or
article threads. Consult the manual for details.
Independently of \program{pdflatex} the \package{hyperref} package
(\url{ftp://ctan.tug.org/tex-archive/macros/latex/contrib/supported/hyperref})
extends the functionality of the \latex{} cross-referencing commands to
include hypertext links, and also ad hoc hypertext links to, for example,
external documents and URLs.
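As a brief sketch of its use (the URL is merely an example; see the
\package{hyperref} manual for the full range of options and commands):
\begin{verbatim}
\usepackage[pdftex]{hyperref}  % use [dvips] or no option with latex
...
See \href{http://www.tug.org/}{the TeX Users Group site},
or go directly to \url{http://www.tug.org/}.
\end{verbatim}
Once the package is loaded, the standard \verb|\ref| and \verb|\cite| commands
and the table of contents entries become hypertext links in the PDF output.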
\subsection{Fonts}
The normal fonts used with \latex{} are the Computer Modern family
developed by Knuth using \mfont~\cite{KNUTH86b}.
All \mfont{} fonts are in the form of
bitmaps, which is unfortunate when it comes to PDF. Typically, PDF will
only use one size of each font for a document, and will scale this if
different font sizes are required. This normally works well as the fonts
used with PDF are typically `Type~1' fonts (i.e., PostScript fonts),
which are designed to be scalable. Bitmap fonts look terrible when scaled
or printed at a resolution they were not designed for.
In other words, expect bad results if you generate a PDF file with
the original Computer Modern fonts.
Perhaps the easiest method of dealing with this is to use the most common
PostScript fonts, namely Times, Courier and Helvetica. All that is
necessary is to add \verb|\usepackage{times}| to the document's preamble.
Alternatively, if you need to use the CM fonts, perhaps because a lot
of mathematics is involved, many \latex{} distributions include
Type~1 versions of the CM fonts. If you don't have them they can be found at
\url{ftp://ctan.tug.org/tex-archive/fonts/cm/ps-type1/bluesky} and
at \url{ftp://ctan.tug.org/tex-archive/fonts/amsfonts/ps-type1} for the AMS fonts.
Goossens \textit{et al.} provide useful and general information
on installing and using different fonts with \latex~\cite{GOOSSENS94},
while for the fontophile, Alan Hoenig~\cite{HOENIG98} delves much
more deeply into the installation of PostScript fonts.
\tex{} doesn't care about the particular shape of any glyph, nor how it
is constructed or represented, it only cares about the space occupied by
each character (i.e., the \file{*.tfm} files).
It is the DVI processor that needs to know in detail
about the fonts in a document. So, the DVI processor has to be told to
use Type~1 CM PostScript fonts.
The following is for the \program{dvips} program.
For convenience, let \path{$TEXMF} stand for the root of the
\path{texmf} tree (e.g., \path{/usr/teTeX/texmf}).
\program{dvips} looks in the file \path{$TEXMF/dvips/base/psfonts.map}
to see which PostScript fonts it can use. This file starts off something
like:
\begin{verbatim}
bchb8r CharterBT-Bold "TeXBase1Encoding ReEncodeFont" <8r.enc <bchb8a.pfb
..
\end{verbatim}
To get \program{dvips} to use Type~1 versions of the CM fonts, additional
lines must be added to \file{psfonts.map} giving similar information about
the fonts. The specification for CM fonts is simpler and consists of
lines like:
\begin{verbatim}
cmb10 CMB10 <cmb10.pfb
cmbsy10 CMBSY10 <cmbsy10.pfb
..
\end{verbatim}
In the version of \program{teTeX} that I use, this information is in
files \file{bsr.map}, \file{bsr-interpolated.map}, \file{cmcyr.map},
\file{hoekwater.map}, and \file{pl.map}, all in directory
\path{$TEXMF/dvips/config}.
The map lines in these files can either be copied by hand into the
\file{psfonts.map} file in \path{$TEXMF/dvips/base}, or in a modern
\program{teTeX} distribution (which should also have all the CM Type~1 font
data) it is easiest to do the following (sketched in command form after the
list):
\begin{itemize}
\item In directory \path{$TEXMF/dvips/config} copy the script file
\path{updmap} to, say, \path{updmap.orig}.
\item Edit \path{updmap} to comment the line \texttt{type1\_default=false}
and uncomment the line \texttt{type1\_default=true}.
\item Run the script via \texttt{./updmap}.
\end{itemize}
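In command form, and assuming a teTeX-like layout, the steps above amount to
something like:
\begin{verbatim}
cd $TEXMF/dvips/config
cp updmap updmap.orig
# edit updmap so that type1_default=true is the active line
./updmap
\end{verbatim}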
Another more general method is to edit the file \file{config.ps} in
directory \path{$TEXMF/dvips/config} and at the appropriate place (which
should be marked, but in any case after the line \texttt{p psfonts.map})
add lines like:
\begin{verbatim}
p +bsr.map
p +bsr-interpolated.map
..
\end{verbatim}
Another option when using \program{dvips}, which avoids all of the above,
is to call it with options, like: \\
\verb|dvips -Pamz -Pcmz -Ppdf -j0 [other options] filename| \\
and then use your preferred \file{*.ps} to \file{*.pdf} conversion process.
\subsection{MetaPost}
John Hobby's \mpost{}~\cite{HOBBY92} is a language-based drawing program
derived from Knuth's \mfont~\cite{KNUTH86b}. \mfont{} was principally designed
for creating fonts and generates bitmapped output, while \mpost{} is
principally for drawing general line illustrations and its output is a
particularly simple form of Encapsulated PostScript.
\begin{figure}
\centering
\ifpdf
\includegraphics{expeg6.mps}
\else
\includegraphics{expeg.6}
\fi
\caption{Metapost illustration of an \textsc{express-g} diagram}
\label{fig:mp}
\end{figure}
This is not the place to describe \mpost, but it can generate several
output files, one for each drawing, from a single input file called, say,
\file{fred.mp}. The output files have a numeric extension corresponding to the
number of the drawing. So, for example, it may generate files \file{fred.1},
\file{fred.2} and \file{fred.3}. For a document that is to be processed via
\program{LaTeX} these files can be included as is. However, for processing
through \program{pdfLaTeX}, the files must have a \file{.mps} extension; for
example, \file{fred1.mps}, \file{fred2.mps} and \file{fred3.mps}.
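As a small sketch (the drawing is an arbitrary rectangle, purely for
illustration), a source file and the processing steps might look like:
\begin{verbatim}
% fred.mp -- a trivial MetaPost source file
beginfig(1);
  draw (0,0)--(72,0)--(72,36)--(0,36)--cycle;
endfig;
end
\end{verbatim}
Running \texttt{mpost fred} (the program may be installed under a different
name on some systems) produces \file{fred.1}, which can then be copied to
\file{fred1.mps} for processing with \program{pdfLaTeX}.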
Figure~\ref{fig:mp} is a \mpost{} illustration that is included in
this document by the code:
\begin{verbatim}
\begin{figure}
\centering
\ifpdf
\includegraphics{expeg6.mps}
\else
\includegraphics{expeg.6}
\fi
\caption{Metapost illustration of an \textsc{express-g} diagram}
\label{fig:mp}
\end{figure}
\end{verbatim}
where \file{expeg6.mps} is a copy of \file{expeg.6},
to cater for processing by either \program{LaTeX} or \program{pdfLaTeX}.
Actually, the following will also work:
\begin{verbatim}
\begin{figure}
\centering
\includegraphics{expeg6.mps}
\caption{Metapost illustration of an \textsc{express-g} diagram}
\label{fig:mp}
\end{figure}
\end{verbatim}
The figure demonstrates part of the capabilities of the
\program{expressg} \mpost{}
package (\url{ftp://ctan.tug.org/tex-archive/graphics/metapost/contrib/macros/expressg}) for
drawing diagrams consisting of boxes, lines and annotations, such as flowcharts
or ER diagrams.
\section{HTML}
There are a number of systems that convert a \latex{} tagged document into
an HTML tagged document. These can be divided into two classes:
\begin{enumerate}
\item Systems that parse the \file{*.tex} file themselves.
\item Systems that use \tex{} as the file parser.
\end{enumerate}
There are several that do their own parsing, but only one that I know of that
uses \tex{} as the parser.
\tex{} is a macro language and the meaning of existing
commands can be changed on the fly, and also new commands can be defined on
the fly~\cite{KNUTH84a}.
Perhaps the most extreme example of this is David Carlisle's
\file{xii.tex} \tex{} code, which is obtainable from
\url{ftp://ctan.tug.org/tex-archive/macros/plain/contrib/xii.tex}: \label{code:xii}
\begin{verbatim}
\let~\catcode~`76~`A13~`F1~`j00~`P2jdefA71F~`7113jdefPALLF
PA''FwPA;;FPAZZFLaLPA//71F71iPAHHFLPAzzFenPASSFthP;A$$FevP
A@@FfPARR717273F737271P;ADDFRgniPAWW71FPATTFvePA**FstRsamP
AGGFRruoPAqq71.72.F717271PAYY7172F727171PA??Fi*LmPA&&71jfi
Fjfi71PAVVFjbigskipRPWGAUU71727374 75,76Fjpar71727375Djifx
:76jelse&U76jfiPLAKK7172F71l7271PAXX71FVLnOSeL71SLRyadR@oL
RrhC?yLRurtKFeLPFovPgaTLtReRomL;PABB71 72,73:Fjif.73.jelse
B73:jfiXF71PU71 72,73:PWs;AMM71F71diPAJJFRdriPAQQFRsreLPAI
I71Fo71dPA!!FRgiePBt'el@ lTLqdrYmu.Q.,Ke;vz vzLqpip.Q.,tz;
;Lql.IrsZ.eap,qn.i. i.eLlMaesLdRcna,;!;h htLqm.MRasZ.ilk,%
s$;z zLqs'.ansZ.Ymi,/sx ;LYegseZRyal,@i;@ TLRlogdLrDsW,@;G
LcYlaDLbJsW,SWXJW ree @rzchLhzsW,;WERcesInW qt.'oL.Rtrul;e
doTsW,Wk;Rri@stW aHAHHFndZPpqar.tridgeLinZpe.LtYer.W,:jbye
\end{verbatim}
If you run this through \tex{} (not \latex) I'm sure you will be surprised at
the result.
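Concretely, that means running something like:
\begin{verbatim}
tex xii      # then view or print the resulting xii.dvi
\end{verbatim}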
There is inevitably a problem when converting from \latex{} to HTML for
a document that includes figures or illustrations, or anything more than the
simplest mathematical
typesetting, as HTML basically provides no support for these. Typically,
mathematics and illustrations are converted to a picture format and then
inserted into the HTML document as graphics, usually with a very poor
appearance.
However, for mathematics the situation is starting to change
because of the advent of MathML (\url{http://www.w3.org/TR/MathML2}).
In particular, the Milestone~13 release
of Mozilla (\url{http://www.mozilla.org/binaries.html})
is a MathML-enabled browser. Some examples, generated by \program{TeX4ht},
are available at \url{http://www.maths.ox.ac.uk/~gartside/mozSuccess}.
All the systems generate HTML tagged documents, with the particular tagging
`style' set by the system. It is advantageous to use a converter which either by
default generates your desired style, or which can be modified in some manner
to do so.
\subsection{Self-parsing systems}
The self-parsing systems incorporate their own parsers for the \tex{}
language. In essence, this means that they `know' the meaning of common
\tex{} commands, but probably not all possible commands.
It is advantageous to use a system that can be
extended to deal with commands that were not anticipated by the author.
The only system I am familiar with in this class is Peter Wilson's
\program{ltx2x} program (\url{ftp://ctan.tug.org/tex-archive/support/ltx2x}).
This program works by replacing known \latex{}
commands, and their arguments, by user-specified text strings~\cite{PRW96h}.
It is unable
to handle anything more than very simple mathematics and ignores any pictures.
The user-specified command texts are kept in a simple command-table file.
Within limits, new \latex{} commands and environments may be specified
within a command-table file and the command texts modified. The \program{ltx2x}
program has been used to `detex' (i.e., remove all \latex{} commands) files,
convert to HTML, and convert to SGML. It cannot convert to XML because of a
yet-to-be-resolved technical problem in dealing with end-of-paragraph tags.
The program is written in C and so requires a C compiler for installation.
The system can be extended via some C programming, in which case
the \program{flex} and \program{bison} programs are also required.
There is no chance that \program{ltx2x} would ever make any sense whatsoever
of \file{xii.tex} on page~\pageref{code:xii}.
Perhaps the most venerable system is the \program{LaTeX2HTML}
system (\url{http://www-texdev.mpce.mq.edu.au/l2h/docs/manual}
or \url{ftp://ctan.tug.org/tex-archive/support/latex2html}), originally
by Nikos Drakos and now maintained by Ross Moore and others. This system
is written using \program{Perl} (\url{ftp://ftp.uu.net/languages/perl}).
It also requires a database management system such as the Unix DBM or NDBM,
or the GNU GDBM system. Further, it requires \program{Ghostscript} and
the \program{netpbm} library of graphics utilities
(\url{ftp://ftp.x.org/contrib/utilities}). To extend or change the default
conversion style requires, I think, some Perl programming. A fuller description
and examples are given in~\cite{GOOSSENS99}.
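As a sketch of basic use (assuming a standard installation; the many
command-line options are described in the manual):
\begin{verbatim}
latex2html example.tex
\end{verbatim}
which, by default, creates a subdirectory \file{example} containing the
generated HTML files and images.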
Another converter is the \program{TtH} program by Ian Hutchinson
(\url{http://hutchinson.belmont.ma.us/tth}), which is cost-free for
non-commercial use; commercial use in this case roughly means use by anyone who
gets paid while using it
(but see \url{http://hutchinson.belmont.ma.us/tth/tth-commercial/email.html}
for the actual wording). There is also another version, \program{TeX2HTML}
(\url{http://www.tex2html.com}), which is the `commercial GOLD version of
the freeware \program{Tth} by Ian Hutchinson'. These programs run on the usual
range of operating systems. An example of the output from \program{TtH} is
given in~\cite{GOOSSENS99}.
\subsection{TeX-based parsing system}
Eitan Gurari's \program{TeX4ht} system appears to be unique in that
it uses \tex{} itself as the parser for the \latex{} document, effectively
taking the \file{*.dvi} file as its starting point for conversion to HTML.
That is, the system does not have to understand \tex{} code itself and can,
in fact, convert
David Carlisle's \file{xii.tex} (page~\pageref{code:xii}) to HTML.
The system is available from
\url{http://www.cis.ohio-state.edu/~gurari/TeX4ht/mn.html}. It consists of
two C programs, one package file, and a set of \file{*.4ht} configuration
files, one for each of the typical \latex{} classes and packages. It also
requires ImageMagick
(\url{http://www.wizards.dupont.com/cristy/www/archives.html}) for handling
illustrations and non-simple mathematics.
Roughly speaking, the system
is extended by writing new \file{*.4ht} file(s), and its output is modified
by writing simple \file{*.cfg} file(s) that override the \file{*.4ht} file(s).
At the moment, the best and most detailed description is given
in~\cite{GOOSSENS99}.
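As an illustrative sketch only (the file name and its contents are
hypothetical; consult the \program{TeX4ht} documentation for the hooks that are
actually available), a private configuration file has the general shape:
\begin{verbatim}
% mycfg.cfg -- hypothetical TeX4ht configuration file
\Preamble{html}
\begin{document}
\Css{body { background-color : ivory ; }}
\EndPreamble
\end{verbatim}
which would then be selected by running something like
\texttt{htlatex example "mycfg"}.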
By default, \program{TeX4ht} can generate a non-tagged file or a file
tagged with either HTML3.2 or HTML4.0. By writing appropriate \file{*.cfg}
files it can be made to generate XML tagged files. The system comes with a
script called \program{htlatex} which controls the conversion process from
\latex; there is also the \program{httex} script for converting a \tex{} document.
For instance, to convert the earlier example \latex{} file,
\file{example.tex} (page~\pageref{code:example}),
to an HTML4.0 tagged document, it is enough to run: \\
\texttt{htlatex example} \\
which will then output \file{example.html}. Similarly, to convert
\file{xii.tex} just run: \\
\texttt{httex xii}.
\section{Examples}
You should be able to find several versions of this document, all of
which have been generated from a single source file (a sketch of the command
lines for producing them follows the list). These are:
\begin{itemize}
\item \file{webguide.tex} --- the \latex{} source.
\item \file{webguide.ps} --- A PostScript version from running
\program{latex} and \program{dvips} on \file{webguide.tex}.
\item \file{webguide.pdf} --- A PDF version from running
\program{pdflatex} on \file{webguide.tex}.
\item \file{webguide.html} --- An HTML4.0 version from running
\program{htlatex} on \file{webguide.tex}.
\end{itemize}
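A plausible set of commands for producing these versions, running \latex{} and
\program{pdflatex} more than once so that the table of contents and
cross-references are resolved, is:
\begin{verbatim}
latex webguide                 # repeat until cross-references settle
dvips webguide -o webguide.ps
pdflatex webguide              # repeat as for latex
htlatex webguide
\end{verbatim}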
The HTML version uses the GIF file
\file{webguide0x.gif} for the illustration; the file is automatically
generated by \program{tex4ht} using the ImageMagick \program{convert}
program. The quality of the picture in the viewed document depends on
the particular viewer; different versions of Netscape, for example,
may display and print the diagram with very different rendering qualities.
I asked \program{TeX4ht} to generate this particular figure at 180dpi instead of
the default 110dpi to improve the quality.
\bibliographystyle{alpha}
%%\bibliography{refs,prw}
\begin{thebibliography}{GMS94}
\bibitem[GMS94]{GOOSSENS94}
Michel Goossens, Frank Mittelbach, and Alexander Samarin.
\newblock {\em The LaTeX Companion}.
\newblock Addison-Wesley Publishing Company, 1994.
\bibitem[GR99]{GOOSSENS99}
Michel Goossens and Sebastian Rahtz.
\newblock {\em The LaTeX Web Companion -- Integrating TeX, HTML, and XML}.
\newblock Addison-Wesley Publishing Company, 1999.
\newblock (with Eitan Gurari, Ross Moore, and Robert Sutor).
\bibitem[Hob92]{HOBBY92}
John Hobby.
\newblock {\em A User's Manual for MetaPost}.
\newblock Technical Report 162, AT\&T Bell Laboratories, Murray Hill, NJ, 1992.
\bibitem[Hoe98]{HOENIG98}
Alan Hoenig.
\newblock {\em TeX Unbound -- LaTeX and TeX Strategies for Fonts, Graphics, \&
More}.
\newblock Oxford University Press, 1998.
\bibitem[Knu84]{KNUTH84a}
Donald~E. Knuth.
\newblock {\em The TeXbook}.
\newblock Addison-Wesley Publishing Company, 1984.
\bibitem[Knu86]{KNUTH86b}
Donald~E. Knuth.
\newblock {\em The METAFONTbook}.
\newblock Addison-Wesley Publishing Company, 1986.
\bibitem[Lam94]{LAMPORT94}
Leslie Lamport.
\newblock {\em LaTeX: A Document Preparation System}.
\newblock Addison-Wesley Publishing Company, second edition, 1994.
\bibitem[Wil96]{PRW96h}
Peter~R. Wilson.
\newblock {\em {ltx2x: A LaTeX to X Auto-tagger}}.
\newblock NIST Report NISTIR, June 1996.
\end{thebibliography}
\end{document}