EuroTeX 2001

Abstracts

Contents

Re-introduction of Type 3 fonts into the TeX world

Wlodzimierz Bzyl

Type 3 characters look like ordinary characters, except that they can image themselves in COLOR. Unfortunately, colored characters are rendered anew each time they are printed, which slows printing. This is because today's printing software does not know how to cache color and grey information. This may change in the future, I hope.

The most important thing to understand about fonts is that they are computer programs contained in one source file. The main part of every font is made up of procedures which define the shapes of the characters. The language used for coding fonts varies. In the case of Type 3 fonts it is PostScript.

Because any particular character may appear thousands of times in a long text, it is important for efficiency that the language provide facilities to reuse the results of already executed code. For black characters PostScript provides the `setcachedevice' function, which allows caching the results of executing the procedures that paint the characters. Current versions of PostScript interpreters do not provide any support for caching color or grey characters.
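As a sketch of the mechanism (the font and glyph names here are hypothetical, not taken from the repository), a minimal Type 3 font defines a BuildChar procedure that paints each glyph; calling `setcachedevice' inside it lets the interpreter cache the rendered glyph, which works only for glyphs painted in the current black:

```postscript
% A minimal Type 3 font. BuildChar paints each glyph; setcachedevice
% declares the advance width and bounding box AND enables caching.
8 dict begin
  /FontType 3 def
  /FontMatrix [0.001 0 0 0.001 0 0] def
  /FontBBox [0 0 750 750] def
  /Encoding 256 array def
  0 1 255 { Encoding exch /.notdef put } for
  Encoding 97 /box put                 % character code for `a' -> /box
  /CharProcs 2 dict def
  CharProcs /.notdef {} put
  CharProcs /box {                     % paint a filled square
    0 0 moveto 750 0 lineto 750 750 lineto 0 750 lineto
    closepath fill
  } put
  /BuildChar {                         % called with: font-dict char-code
    exch begin
      Encoding exch get                % glyph name for this code
      750 0 0 0 750 750 setcachedevice % wx wy llx lly urx ury
      CharProcs exch get exec          % run the painting procedure
    end
  } def
  currentdict
end
/BoxFont exch definefont pop

/BoxFont findfont 12 scalefont setfont
72 72 moveto (aaa) show               % second and third `a' come from the cache
```

To paint a glyph in colour, BuildChar would call `setrgbcolor' and would have to replace `setcachedevice' with `setcharwidth', forfeiting the cache; that is precisely the inefficiency described above.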

Perhaps this added inefficiency is what made Type 3 fonts such a rare species. But even with today's technology, there are niches where these fonts could make printed texts more readable, personalized and beautiful. These niches include:

  • symbols (math, computers, linguistic)
  • logos
  • headers
  • dingbats
  • initial caps
  • emotional (smileys)
  • ornaments
  • tilings

The purpose of my presentation is an attempt to re-introduce Type 3 fonts into the TeX world. To this end, I created under CVS (Concurrent Versions System) a repository devoted to Type 3 fonts. The repository:

  1. is organized as a single tree which can easily be planted into an existing TeX Directory Structure,
  2. provides several Type 3 specimens of different origins, namely PostScript (native), MetaPost and MetaFont, with examples that can be conveniently instantiated and inspected on any UNIX machine,
  3. contains exact specimen descriptions and instructions on how to raise Type 3 fonts at home.

The Euromath System - a structure XML editor and browser

J. Chlebíková, J. Gurican, M. Nagy, I. Odrobina

Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia

The Euromath System is an XML WYSIWYG structure editor and browser with the possibility of TeX input/output. It was developed within the Euromath Project and funded through the SCIENCE programme of the European Commission. Originally, the core of the Euromath System was based on the commercial SGML structure editor Grif. At present, the Euromath System is in the final stage of re-implementation - based on the public domain structure editor Thot.

During the re-implementation process several fundamental differences between Thot's basic features and the purposes of the Euromath System had to be resolved:

  • Thot had no direct support for XML. The Euromath System provides a tool that is capable of introducing an arbitrary XML DTD into Thot, to allow structure editing according to any DTD.
  • The opening and saving of XML documents, and support for Unicode.
  • Thot is an authoring system, but the Euromath System is also a web browser.

The Euromath applications were added to the Thot/Euromath structure editor to extend the possibilities of the Euromath System as a structure editor:

  • Personal File System with a new interface and communication with the Zentralblatt MATH database; automatic translation into the `euromath_article.DTD'.
  • Translation of (La)TeX files to the Euromath standard DTD.
  • LaTeX output.
  • The possibility to include LaTeX math expressions during editing with macro support.
  • Zentralblatt MATH database output in BibTeX format.

Instant Preview and the TeX daemon

Jonathan Fine

Cambridge, UK
Email: jfine@activetex.org
Web: www.activetex.org

Instant Preview is a new package, for use with Emacs and xdvi, that allows the user to preview instantly the file being edited. At normal typing speed, and on a 200MHz machine, it refreshes the preview screen with every keystroke.

Instant Preview uses a new program, dvichop, that allows TeX to process small files over 20 times quicker than usual. It avoids the overhead of starting TeX. This combination of TeX and dvichop is the TeX daemon.

One instance of the TeX daemon can serve many programs. It can make TeX available as a callable function. It can be used as the formatting engine of a WYSIWYG editor.

This talk will demonstrate Instant Preview, describe its implementation, discuss its use with LaTeX, sketch the architecture of a WYSIWYG TeX, and call for volunteers to take the project forward.

Instant Preview at present is known to run only under GNU/Linux, and is released under the GPL. It is available at: www.activetex.org

Directions for the TeXLive Software

F. Popineau

I intend to talk about a few issues with the current versions of the software on the TeXLive CD-ROM. Some of them can or will be fixed with time and work. Others are still open for discussion.

The first part of the talk will address the setup program. Past experiments with the Win32 installer have revealed that the problem was harder than expected. The new description files set up for TeXLive 6 by Sebastian Rahtz will allow a more effective way to use the setup program, and the following features will be available (at least for the Win32 version):

  • free TeXLive from its CD-ROM physical medium and make it available over the Internet from reference sites,
  • update one or all of the installed packages,
  • remove an installed package,
  • keep track of dependencies between packages, i.e., also update dependents when you update a package, or remove dependents when you remove a package,
  • browse the current list of available packages.

Still, there are questions about the setup program and these description files:

  • How about a Unix version of the setup program?
  • Why do we need to maintain a second set of XML files, since we already have the Catalogue on CTAN?
  • Do we need two separate programs to install the software and administrate it?

The second part of the talk will address a set of possible extensions to the web2c/kpathsea pair (and a call for code contributions!):

  • Kpathsea is slow to start (a bit more so under Win32 than under Unix), and now that programs use the \write18 facility more and more frequently, this is expensive in time. The bottleneck is the creation of the hash table for all the entries in the ls-R files, which should be built once and for all, at least within the same run.
  • Kpathsea could be extended towards the internet so that remote texmf trees can be searched. This is easy to do in a naive way, but thinking about it opens up many options.
  • Under Win32, TeX compiles as a DLL, which means it can be loaded by a program and its functions called dynamically. However, tex.dll has very few entry points, requires a console or a file to print errors in batchmode, and this output has to be parsed. Its console-mode program roots should be cleaned up a bit more. One possible application of a true background tex.dll would be to interface it with XEmacs in a way that would not require XEmacs to talk to an external program, but directly to the TeX engine.

Usage of MathML for paper and web publishing

Tobias Burnus

The Mathematical Markup Language (MathML) of the World Wide Web Consortium (W3C), based on XML, has gained more support in recent months. Looking at the W3C's list of software which supports MathML, one sees that the list of applications which can produce MathML is rather long, but the list of applications supporting typesetting of MathML is rather short.

I will concentrate on these points:

  1. Using MathML to write real formulas. I started using it as a physicist writing my own formulas, but I will also use some more complicated examples from the fields of physics and mathematics, trying to reach the limits of the language.
  2. Typesetting MathML on paper in high quality. Writing MathML alone doesn't help if you cannot print it. I will look at the quality of output and alternative representations using ConTeXt.
  3. Typesetting on the Web. Apart from the fact that there are some applications which can produce MathML but not TeX output, the real use for MathML is direct and fast representation on the Web. For that I will look at the MathML features of Mozilla.

Font Specials

Boguslaw Jackowski

BOP s.c., Piastowska 70, 80-363 Gdansk, Poland

Krzysztof Leszczynski

Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716 Warszawa, Poland

We present an application of a special pseudo-font, cmdfont, to include an additional format-independent markup for augmenting, in a sense, \special instructions in several document types, especially in METAPOST and TeX ones. The examples show that the technique may prove to be exceptionally useful when applied to METAPOST figures.

METATYPE1: a METAPOST-based engine for generating Type 1 fonts

Boguslaw Jackowski, Janusz M. Nowacki and Piotr Strzelczyk

A package that makes use of METAPOST, but also AWK, Perl and T1utils, for generating PostScript Type 1 fonts is described. The package also allows converting Type 1 fonts to METAPOST sources. Some general remarks concerning the creation of other font formats are included.

Fonts of the Future -- the Future of Fonts

Marek Rycko and Boguslaw Jackowski

A typical contemporary typesetting system is based on a certain model of the typesetting process that divides various tasks and knowledge between the system itself and several sets of files called fonts.

Many existing font types and structures are used for this purpose. The font types include TeX fonts, PostScript fonts, TrueType fonts, OpenType fonts and many others. Some of them may be used in the current implementations of TeX and its clones.

The existing model of typesetting, including the existing model of fonts, evolved from the old typesetting methods. We claim that this model is inadequate for many typesetting tasks, both popular and specialised. In our opinion, if a new typesetting system is to be designed, and if the system is to be revolutionary, it must include a new concept of fonts.

We propose some of the desired features of the new fonts. We show examples of applications, where such fonts could be extremely advantageous.

Pattern Generation Revisited

David Antos and Petr Sojka

The technique of covering and inhibiting patterns invented and implemented by Liang in the program PATGEN is highly effective and powerful. But PATGEN, being nearly twenty years old, doesn't suit today's needs:

  • it is limited to 8-bit ASCII,
  • it is nearly impossible to make changes, as the program is highly optimized (like TeX),
  • it uses static data structures,
  • reusing the pattern technique and the packed-trie data structure for problems other than hyphenation (ligature breaking, spelling) is cumbersome.

These reasons made us decide to reimplement PATGEN from scratch in an object-oriented manner (like NTS, the New Typesetting System reimplementation of TeX), and to create the PATtern LIBrary PATLIB and a hyphenation pattern generator based on it.

This approach allows the code to be used in many applications of pattern recognition, including various natural language processing tasks, optical character recognition, and many others.

The Implementation of MlBibTeX

Jean-Michel HUFFLEN

Laboratory of Computer Science
University of Franche-Comte
25000 BESANCON --- FRANCE

"MlBibTeX" stands for "Multilingual BibTeX". This is a re-implementation of BibTeX, including multilingual features. For example, an entry may look as follows:

@BOOK{robeson1968i,
   AUTHOR = {Kenneth Robeson},
   TITLE = {The Flaming Falcons},
   ...
   NOTE = {[Premi\`{e}re \'{e}dition en juin 1939 dans] * french
[Originally published June 1939 in] * english
\emph{Doc Savage Magazine}},
   ...
   LANGUAGE = english}

The LANGUAGE field is used when users wish the language of each bibliographical reference to be that of the reference. Otherwise, the language of the document can be chosen as the language for the whole of the bibliography.

In any case, switches w.r.t. the language used are possible and can be specified by:

   [...] * language-name

If we consider the entry "robeson1968i" as a reference for a document written in English, the NOTE field is equivalent to:

    NOTE = {Originally published June 1939 in
\emph{Doc Savage Magazine}},

If it is processed by MlBibTeX for a document written in French, it is equivalent to:

    NOTE = {Premi\`{e}re \'{e}dition en juin 1939 dans
\emph{Doc Savage Magazine}},

Other multilingual features are included: for example, using the correct file for hyphenation, and processing the bibliographical keywords ("and", "no", ...) in an appropriate way. As mentioned in our example, MlBibTeX can process the French and English languages, but is not limited to them. It can be used in cooperation with the multilingual package "babel" as well as with other "specialised" packages such as "french" or "german".

The MlBibTeX project started in October 2000 and the first public version will be available Summer 2001. The features of MlBibTeX were informally presented at the French LaTeX conference GUTenberg 2001. At EuroTeX 2001, we intend to give a more formal presentation, including a very precise description of the grammar we use. So, this presentation should be useful for end-users of this "new BibTeX" as well as for people developing tools based on BibTeX (for example, translators from BibTeX to HTML).

POLIGRAF: from TeX to print

Janusz M. Nowacki

The Poligraf package is a format-independent support package for TeX, facilitating the preparation of TeX documents for printing in professional printing houses. Poligraf surrounds a page with graphical elements such as crop marks, registration marks, color steps, color bars etc. The package can also be used for colour separation (as an interface for the package CMYK-HAX). The Poligraf package was first presented at the BachoTeX'96 conference. The new version was written from scratch.

The bibliographic module for ConTeXt

Taco Hoekwater

Elvenkind B.V.

The bibliographic module for ConTeXt (m-bib) provides an interface between ConTeXt and BibTeX, and simultaneously extends ConTeXt functionality in this area. During EuroTeX, Hans Hagen and I will release version 1.0 of this module. The module uses a database format in TeX macros that is independent of BibTeX. All formatting specifications are given in TeX macros instead of the idiosyncratic BibTeX language. The talk will mostly focus on the database structure that is used by the module, and the associated macros.

TeXlib: a TeX reimplementation in library form

Giuseppe Bilotta

TeXlib: a TeX reimplementation in library form will be a presentation of the TeXlib project. I will describe the reasons behind the creation of the project (interactive TeX-ing), its main aims (breaking through the sequential approach of TeX to document sources) and secondary aims (convergence of TeX extensions, librarization of TeX friends), the basic ideas behind the library structure (resource stack building, modularization of input, processing/typesetting, and output, possible extensibility), and the problems/pitfalls I already know we will meet along the way (Input/Output/Error management, backward compatibility).

Literate Programming: Not Just Another Pretty Face

Michael Guravage

The fame of TeX, Metafont, and friends extends much further than does that of Literate Programming, the style used to create them. Knuth intended TeX "for the creation of beautiful books", but does that imply that Literate Programming produces beautiful programs? The aesthetics of a literate program are instantly recognizable, but what is behind its pretty face? Are there compelling software engineering reasons to choose Literate Programming? To help answer this question, we will describe current work using graph theory to visualize and compare the structures of literate programs with the call graphs of traditional programs. In this light, are there reasons, technical or historical, why Literate Programming should, or should not, be considered for NTS?

TeX and/or XML: good, bad and/or ugly

Hans Hagen

PRAGMA ADE
8061 GH Hasselt
The Netherlands
E-mail: pragma@wxs.nl
URL: www.pragma-ade.com

As a typesetting engine, TeX can work pretty well with structured input. One can build interfaces that are reasonably pleasant to work with and code in. XML, on the other hand, is purely meant for coding, and its more rigorous scheme prevents errors and makes reuse easy. Unlike TeX, XML is not tied to typesetting, although there are tools (and methods) to easily convert the code into other structured code (like HTML) that can then be handled by rendering engines. Should we abandon coding in TeX in favor of XML? Should we abandon typesetting using TeX in favor of real-time rendering of relatively simple layout designs? Who are the good and the bad guys in that world? And even more importantly: to what extent will document design (and style design) really change?

TeX Top Publishing: an overview

Hans Hagen

PRAGMA ADE
8061 GH Hasselt
The Netherlands
E-mail: pragma@wxs.nl
URL: www.pragma-ade.com

TeX is used for producing a broad range of documents: articles, journals, books, and anything else you can think of. When TeX came around, it was no big deal to beat most of the typesetting programs of the day. But how well does TeX compete today with mainstream Desk Top Publishing programs?

What directions will publishing take and what role can TeX play in the field of typesetting? What are today's publishing demands, what are the strong and what are the weak points of good old TeX, and what can and should we expect from the successors of TeX?

Natural TeX notation in mathematics

Michal Marvan

Current TeX/LaTeX notation for math expressions encodes presentation, while mathematicians generally wish to communicate content. We would like to introduce Nath, a LaTeX 2.09/2e style implementing natural math notation. The Nath notation once again exploits the key principle of LaTeX typography: separation of presentation and content. It is (intended to be)

  • context-independent (allowing transition from displaystyle to textstyle by mere replacement of $$ with $),
  • producing traditional math typography,
  • basically backward compatible with TeX/LaTeX/AmSTeX.

In Nath, for instance, \frac denotes a fraction as such, and the style selects the appropriate form: built-up, case, or slash. When the slash form is selected (typically in in-line formulas or in sub- and superscripts), parentheses are placed around the numerator, denominator, or the whole fraction whenever required by the rules of precedence. In particular, Nath prevents built-up non-numeric fractions from occurring in in-line formulas. Another algorithm decides on the type of numeric fractions (case or built-up) in displayed formulas.
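As an illustration of this behaviour (a hypothetical sketch based on the description above; the exact markup Nath accepts may differ), the same \frac source adapts to its context:

```latex
\documentclass{article}
\usepackage{nath}% the style introduced in this abstract
\begin{document}
% Displayed: Nath chooses the built-up form.
$$ \frac{x+y}{z} $$
% The same source in-line: Nath chooses the slash form, adding
% the parentheses required by precedence, roughly (x+y)/z.
$ \frac{x+y}{z} $
\end{document}
```

The point is that the author writes content (\frac) in both places; only the style decides on presentation.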

Presentation commands are kept at the necessary minimum. Therefore, displaystyle delimiters automatically adjust their size and position to the material enclosed and, needless to say, do so across line breaks (thus rendering \left and \right nearly obsolete). Moreover, subtle parts such as sub- and superscripts do not affect the size of the delimiters.

The price of natural notation is that TeX works harder and spends more time on formatting the math material. However, savings in human work appear to be substantial. Benefits should be even more obvious in the context of lay publishing.

What does NTS offer the TeX community?

Simon Pepping

Elsevier Science
Amsterdam
Email: s.pepping@elsevier.nl

After many years of dreaming and talking about it, and three years of development, NTS is here. Now what?

Until recently NTS was discussed and tested by a small group of people. I was not a member of that group and got my first copy of NTS in March 2001. In this talk I will look at NTS from the perspective of an outsider. The NTS group has given the TeX community NTS, now what is the TeX community at large going to do with it?

Let it rust, because it does what TeX does and does it more slowly? Run it, because we believe in the future, even though it is so resource hungry? Or open it, dissect it, study, discuss, admire and curse its design, find new ways to use and misuse it, to tweak it, to extend it, and to make it do all sorts of tasks that we happen to have a need for?

NTS is not TeX. It is not written to perform a huge task in a tiny computer, skillfully folded over and in to fit into an unbelievably tiny memory space. It is not written in a programming language that has gone out of use. NTS is a modern program, written according to current rules of OO technology and modularization, with extensibility in mind, more mindful of the structure of its design than of the amount of resources that design takes up. NTS invites us to work on it, to make it evolve, to let it grow so as to be capable of an ever growing range of tasks, to make it offer its services in modern architectures of interoperability.

At the conference I may be able to show some results with NTS, which at the time of writing do not yet exist. And anyhow, I will try to convince the audience that NTS is a typesetting engine that offers the quality of TeX in a design that allows it to play a role in modern software systems.

The TeX community has spent a lot of time, thought, heated discussion and money to get NTS. Now that we have it, let us not miss the opportunities that it offers.

DCpic, a (yet another) commutative diagram package based on the PiCTeX macro package

Pedro Quaresma

Departamento de Matemática
Faculdade de Ciências e Tecnologia
Universidade de Coimbra
P-3000 COIMBRA, PORTUGAL
email: pedro@mat.uc.pt
url: www.mat.uc.pt/~pedro/

Commutative diagrams (Diagramas Comutativos, in Portuguese) are a kind of labeled graph widely used in category theory.

The (not so) bad
Most of the packages provide a (not so) simple user interface, consisting of a certain matrix notation. Such a specification may become much too obscure for a (not so) complex diagram.
The good
In the DCpic package the user interface consists of a graph-like syntax, with objects and morphisms (arrows) laid down in a Cartesian coordinate system.
The not so good
The use of the PiCTeX package provides a powerful graphical base, but also puts a (not so) heavy burden on the compiler, which may slow down the compilation.

Math typesetting in TeX: The good, the bad, the ugly

Ulrik Vieth

Vaihinger Straße 69
D-70567 Stuttgart, Germany
netaddress: ulrik.vieth@tesionmail.de

It is well known that TeX is very good at typesetting math. TeX users have grown so accustomed to this that it is sometimes taken for granted, yet it is an important feature that deserves to be recognized as such. Math typesetting was one of the main reasons why TeX was developed in the first place, and why TeX has become successful and widely adopted in the science communities.

While the quality of math typeset with TeX is probably still unmatched, just having a system capable of producing very high quality output doesn't mean that all is well. When looking at TeX's math typesetting engine from the implementor's point of view, TeX does show its age. Apart from various limitations and maybe a few missing features, there are all sorts of very peculiar features and assumptions built into the system and the accompanying math fonts.

In this paper the problems and shortcomings of TeX's math typesetting engine and math fonts will be discussed and analyzed in detail, focussing on the technical aspects of math fonts such as the glyph metrics and font dimensions while leaving aside the topic of glyph sets and font encodings.

Use of TeX plugin technology for displaying real-time weather and geographic information

S. Austin (a), D. Menshikov (b), and M. Vulis (a)

(a) Dept. of CSC, CCNY, NY, USA
(b) MicroPress, Inc, USA

In this article we show how by means of the GeX plugin technology one can process and display geographic information including real-time weather data as part of a TeX->PDF compilation.

The plugin technology [introduced at TUG2000] functions under the PDF backend of the VTeX compiler; it allows the user to enhance the TeX-integrated PostScript converter (GeX) with user-defined language extensions. Plugins can be used for plotting or retrieving specialized data; earlier plugin examples were used for business or scientific plots.

The Tiger plugin is a new experimental plugin which can retrieve static geographic data (the Tiger database, for instance) as well as real-time weather data, and plot them together within the context of a TeX document compilation. It can be used, for example, to supplement static TeX documents (papers, books) with maps, as well as (in a server environment) to produce real-time weather maps.

ASCII-Cyrillic and its converter email-ru.tex

Laurent Siebenmann

E-mail: lcs@topo.math.u-psud.fr, lcs@math.polytechnique.fr

A new faithful ASCII representation for Russian, called ASCII-Cyrillic, is presented, one which permits accurate typing and reading of Russian when no Russian keyboard or font is available -- as is often the case outside of Russia.

Using pdfTeX in a PDF-based imposition tool

Martin Schröder

Crüsemannallee 3
28213 Bremen
Germany
Net address: martin@oneiros.de
URL: www.oneiros.de

pdfTeX has been used successfully to build an industrial-strength PDF-based imposition tool. This paper/talk describes the pitfalls we encountered and the lessons learned.

From database to presentation via XML, XSLT and ConTeXt

Berend de Boer

Peperstraat 29
5311 CS Gameren
The Netherlands
Net address: berend@pobox.com
URL: www.pobox.com/~berend

Much data exists only in databases. Examples are contacts, addresses or books. Every once in a while this data must be presented to humans. What is more suited for this task than a batch typesetting engine, and what more pleasing to the eye than TeX?

In this presentation I show you how you can extract information from XML or relational databases and typeset it with ConTeXt. I attempt to show every conceivable method, such as using SQL queries and XSL to generate suitable XML which is typeset directly by ConTeXt's XML typesetting subsystem, and using SQL and XSL to generate ConTeXt code directly.

Demonstrated are the batch SQL query tools from InterBase and DB/2. For XSL processing, Xalan is used. All these tools exist on popular Unix platforms and on Microsoft Windows.

A WYSIWYG TeX prototype: texlite

Igor Strokov

Novosibirsk, Russia

As TeX was not designed for real-time editing of typeset documents, it faces a growing challenge from modern WYSIWYG editors. Some TeX implementations partially solve this problem by means of shell programs, which provide logical formatting of the input text (Scientific Word, LyX) or an instant preview of the typeset document (e.g., Textures; see also the article by Jonathan Fine in these proceedings). However, if one aims to edit a document in its final (typeset) form in real time, regardless of the document's size and of the format and macros used, one cannot avoid interfering with TeX internals. In the presentation I will explain which interference is required and why, and will show how it works in the existing prototype program called texlite.

Texlite is based on Knuth's canonical "TeX the program" and has three new features:

  • It can start or continue compiling a document from an arbitrary page.
  • It can rapidly reformat a selected paragraph (usually the paragraph currently being edited).
  • It remembers the origin of visible elements in the source text.

In addition, texlite has a visual shell which displays the ready (typeset) document and its source text in separate windows. One can edit the document in either form, obtaining the other view automatically. Thus texlite provides a real WYSIWYG mode for editing arbitrary (La)TeX documents. Although this mode has several evident advantages (including the implementation of local and Internet links in a document in an HTML manner -- to be shown), the use of the source text is sometimes preferable and should remain available in any visual program based on TeX.

`Typography' and production of manuscripts and incunabula

Paul Wackers

When we try to produce well-structured books that are also pleasing to the eye, we stand in a tradition of more than twenty centuries. The appearance of modern western books, however, developed slowly during the middle ages and got its definitive form in the decades around 1500, the first phase of book printing in Europe. In my paper an outline of this development regarding script and layout will be presented. In these aspects incunabula are completely comparable to manuscripts. Regarding production and the presentation of the book as a whole, however, the first phase of printing books shows major changes. The title page was invented, and the production process slowly became completely mechanised.

These changes in layout and production will be illuminated by means of a series of images, from a fifth-century Virgil manuscript to sixteenth-century fable collections. Special attention will be given to Gutenberg, the first printer in European history.

TeX in Teaching

Michael Moortgat, Richard Moot, Dick Oehrle

A well-known slogan in language technology is 'parsing-as-deduction': syntax and meaning analysis of a text takes the form of a mathematical proof. Developers of language technology (and students of computational linguistics) want to visualize these mathematical objects in a variety of formats.

We discuss a language engineering environment for computational grammars. The kernel is a theorem prover, implemented in the logic-programming language Prolog. The kernel produces TeX source code for its internal computations. The front-end displays these in a number of user-defined typeset formats. Local interaction with the kernel is via a tcl/tk GUI. Alternatively, one can call the kernel remotely from dynamic PDF documents, using the form features of the hyperref package.

We report on our experiences with this system in the computational linguistics programs of Utrecht University.

Conversion of TeX fonts into Type1 format

Péter Szabó

Email: pts@fazekas.hu

The most common problem with PDF files produced by TeX (either by pdfTeX or by normal TeX → dvips → ps-to-pdf-converter) is that Acrobat Reader renders most fonts slowly and unreadably ugly on screen. This is because most TeX fonts can be included in PDF files only as high-resolution raster (bitmap) images, and Acrobat Reader shows such images slowly and inaccurately. This has been one of the famous Ugly Bits of TeX for years.

The solution to the problem is to include TeX fonts as vector outlines. Unfortunately most TeX fonts are available only in METAFONT format, and currently no good converter to vector outline font formats supported in PDF (such as Type1 or TrueType) exists.

TeXtrace is a free (GPL-ed) program I have written recently to convert any TeX font into a Type1 .pfb outline font file immediately usable by pdfTeX etc. TeXtrace renders the font at high resolution, and calls the program AutoTrace to convert each bitmap to a vector outline. I have managed to convert more than 500 fonts into Type1 format, including all the long-awaited EC fonts.

In my demonstration I will analyze the METAFONT-to-vector-outline conversion problem and font format compatibility issues in great detail, describe how TeXtrace works, and compare it to other existing solutions with respect to font quality, font size, the amount of human effort needed, etc.

A Tour around the NTS implementation

Karel Skoupy

ETH Zurich
Email: skoupy@inf.ethz.ch

NTS is a modular object-oriented reimplementation of TeX. It is written in Java and is meant to be extended with new functionality and improvements. NTS is not simpler than the original TeX (because it does exactly the same job), but it is better structured. The dependencies between parts of NTS are expressed by clear interfaces. It should be much easier to make changes and extensions with an understanding of only a specific part of the system. The problem is that NTS contains hundreds of classes, and for a potential extender it is difficult to find where to start.

I will try to show the path by which the characters and constructions present in the input file pass through the machinery and get typeset. Along the way we will visit the key classes and concepts of NTS, explain the differences from the original TeX, and point out good places to dig into the system.

The presentation will not be too technical, but rather an intuitive illustration of the principles, which could be of interest to everyone who wants to know how NTS and TeX work.