STM publishing: tools, technologies and change
A WordPress site for STM Publishing


LuaTeX token library: simple example of scanners to mix TeX and MetaPost

Posted by Graham Douglas


In this short article I share the results of using LuaTeX’s token library as one way to intermix TeX and MetaPost code. I used LuaTeX 1.0.1, which I compiled from the source code in the experimental branch of the LuaTeX SVN repository, but I believe it should also work with LuaTeX 1.0 (and possibly with some earlier versions too). In addition, I used luamplib version 2016/03/31 v2.11.3 (I downloaded my copy from GitHub). Note that I do not have a “standard” TeX installation—I prefer to build and maintain my own custom setup (very small and compact).

The LuaTeX token library

I will not try to explain the underlying technical details but simply point you to this article by Hans Hagen and the relevant section of the LuaTeX Reference Manual (section 9.6: The token library). Here, I’ll just provide an example of using the token library—not a sophisticated or “clever” example, but one designed simply to demonstrate the idea.


The goal is to have TeX macros that contain a mixture of TeX and MetaPost code, and to find a way to expand those macros into a string of MetaPost code which can be passed to LuaTeX’s built-in MetaPost interpreter: mplib. Suppose, just by way of a simple example, that we define the following TeX macros:

\def\mpbf#1{beginfig(#1);\space}
\def\pp#1#2{pickup pencircle xscaled #1mm yscaled #2mm;\space}
\def\mpef{endfig; end}
\def\fc{fullcircle\space}
\def\draw#1{draw \fc scaled #1;\space}

and we’d like to use them to write TeX code that can build MetaPost graphics. Note that the definition of \draw also contains the command \fc, a small helper macro that expands to the MetaPost path fullcircle.

The scanner

The following code uses LuaTeX’s token-library function scan_string() to generate a string from a series of incoming TeX macros—by expanding them—and stores the resulting string in a toks register so that we can later retrieve and use the result. The heart of the mechanism is this call:

local p = token.scan_string()
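The complete macro is not reproduced above, so here is a minimal reconstruction of how \scanit might be defined—assuming plain TeX’s \newtoks and the register name mpcode, which is the name used later in this article:

```tex
% Reconstruction (not the original listing): a toks register to hold the
% expanded MetaPost code, and a macro that scans the following braced
% group (expanding macros as it goes) and stores the result.
\newtoks\mpcode
\def\scanit{\directlua{
  local p = token.scan_string()
  tex.toks["mpcode"] = p
}}
```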

We could use the resulting macro—call it \scanit—like this:

 \scanit{...text and TeX macro in braces...}

but instead we’ll add just a little more functionality. Suppose we further define another TeX macro

\def\codetest{\mpbf{1} \pp{0.3}{0.75} \draw{12}\mpef }

which contains a sequence of commands that, once expanded, will generate the MetaPost program to produce our graphic. To expand our TeX macro (\codetest) we can do the following:

\scanit{\codetest} and the output is \the\mpcode

Here, the braces "{" and "}" are needed (I think) to make token.scan_string() work correctly (I may be wrong about that, so please run your own tests). Anyway, the \codetest macro is expanded and (thanks to \scanit) the resulting MetaPost code is stored in the toks register called mpcode. We can see what the toks register contains simply by typesetting the result with \the\mpcode—note that you may get strange typesetting results, or an error, if your MetaPost code contains characters with catcode oddities. You can use \directlua{print(tex.toks["mpcode"])} to dump the content of the mpcode toks register to the console rather than typesetting it. In my tests, the mpcode toks register contains the following fully expanded MetaPost code:

beginfig(1); pickup pencircle xscaled 0.3mm yscaled 0.75mm; draw fullcircle scaled 12; endfig; end

And this is now ready to be passed to MetaPost (via mplib). But how? Well, one option is to use the luamplib package. If you are using LaTeX (i.e., the LuaLaTeX format) then you can use the following environment provided by luamplib:

\begin{mplibcode}
...your MetaPost code goes here
\end{mplibcode}

However, our MetaPost code is currently contained in the mpcode toks register. So here’s my (rather ugly) hack, which uses \expandafter to get the text out of mpcode and sandwich it between \begin{mplibcode} and \end{mplibcode}. I am sure that real TeX programmers can produce something far more elegant!
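The hack itself is missing from this copy of the post; purely as a hypothetical sketch of the \expandafter sandwich described above (untested here, and the helper name \mcodexaux is invented—luamplib’s own scanning of the environment body may demand further trickery):

```tex
% Hypothetical reconstruction -- not the original macro.
% \mcodexaux wraps its argument in the luamplib environment;
% the \expandafter pair expands \the\mpcode before the wrapper sees it.
\def\mcodexaux#1{\begin{mplibcode}#1\end{mplibcode}}
\def\mcodex{\expandafter\mcodexaux\expandafter{\the\mpcode}}
```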


Consequently, we just need to issue the command \mcodex to draw our MetaPost graphic. I hope this is an interesting suggestion/solution that others might find useful.

Filed under: Uncategorized

Remote working: An opportunity to address publishing’s diversity problem?

Posted by Graham Douglas

Many areas of publishing are anxious to innovate and, quite rightly, want to increase the diversity of their workforce. Innovation usually focuses on solutions to improve publishing’s outputs—the revenue-generating products and services which pay the bills. But the tendrils of innovation should also reach out to address the needs of that most crucial of inputs—the people who make things happen. One way to “walk the walk” and enable diversity—not just hand-wring, produce endless virtue-signalling tweets and give nice talks—is to innovate your recruitment and working practices by offering many more remote-working opportunities. By doing so you will open up a route to employment which immediately reaches into every corner of the country and into every community. Why fish from a pool of talent when you can trawl the deeper ocean? As someone who spent three years working remotely for a US publisher I can say, from real experience, that it can work—and be very effective.

It does not require penetrating insight to realize that a great deal of UK publishing activity is clustered around a small number of key locations. To some, it may sound heretical to suggest that not everyone actually wants, or is able, to work in any of the handful of locations into which so much of our publishing activity has coalesced. Family commitments, disability or just sheer expense mean that many highly employable and talented people simply cannot relocate or commute. The crippling cost of housing, or the UK’s grotesquely expensive rail fares are, for many, huge barriers to employment within the centres of our publishing universe. But you have to “go where the work is,” right? Is that really true in this highly advanced economy—with (albeit not universal) access to high-speed communications, cloud-based software systems and mobile technology? Have our recruitment thinking, working patterns and management practices really failed to evolve at the speed of technology? Why can’t more work go to where the employees are? Yes, of course, most publishers use a lot of freelancers and contractors who work remotely, but not everyone wants to be self-employed—many just want a job with a regular income. Recruitment agencies can play a big part here by being pro-active and asking employers if they’ll consider remote working, and on what terms—you’ll almost certainly attract more candidates too.

Obviously, no-one could sensibly claim that remote working is possible for all publishing jobs in every publisher, or that remote working has no impact on teams who are office-bound. Equally, not everyone wants to work remotely or has the temperament to do so. Without question, there are organizational, technical—and management culture—issues to consider: no-one should pretend there’s a secret panacea. However, unless there’s a conscious effort to look into providing remote-working opportunities, to document and identify the challenges and pro-actively address them, then publishers will continue to limit their recruitment options and, perhaps, draw from an unnecessarily restricted subset of our national talent. Employers who enable their employees to work remotely may be surprised at the level of commitment and dedication received in return—if someone desperately wants that job but needs to work remotely, and is given the opportunity to do so, chances are they’ll move heaven and earth to do their very best work for that employer.

Filed under: Uncategorized

The flavours of TeX—a guide for publishing staff: LaTeX, pdfTeX, pdfLaTeX, XeTeX, XeLaTeX, LuaTeX et al

Posted by Graham Douglas


This is not a technical article on using TeX (i.e., TeX installation or programming). Instead, it offers some background information for people who work in STM (scientific, technical and medical) publishing and aims to provide an easy-to-follow explanation by addressing the question “what is TeX?”—and, hopefully, demystifies some confusing terminology. My objective is, quite simply, to offer an introduction to TeX-based software for new, or early-career, STM publishing staff—especially those working in production (print or digital). Just by way of a very brief bio, as in “am I qualified to write this”: I’m writing this piece based on my 20+ years of experience in STM publishing, having worked in senior editorial positions through to technical production and programming roles. In addition, over the last few years I have spent a great deal of time building and compiling practically every TeX engine from its original source code, together with creating my own custom TeX installation to explore the potential of production automation through modern TeX-based software.


If you work in STM (scientific, technical and medical) publishing, especially within mathematics and physics, chances are that you’ve heard of something called “TeX” (usually pronounced “tech”)—you might also have encountered, or read about, authors using tools called LaTeX, pdfTeX, pdfLaTeX, XeTeX, XeLaTeX, LuaTeX, LuaLaTeX etc. Unless you are a TeX user, or familiar with the peculiarities of the TeX ecosystem, you may be forgiven for feeling somewhat confused as to what those terms actually mean. If you are considering working in STM publishing and have never heard of TeX, then I should just note that it is software which excels at typesetting advanced mathematics and is widely used by mathematicians, physicists and computer scientists to write and prepare their journal articles, books, PhD theses and so forth. TeX’s roots date back to the late 1970s but over the intervening decades new versions have evolved to provide considerable enhancements and additional functionality. Those new to STM publishing, or considering it as a career, may be surprised to learn that a piece of software dating back to the late 1970s is still in widespread use by technical authors—and publishing workflows.

NOTE: TeX is not just for mathematics. It is a common misconception that the use of TeX is restricted to scientific and technical disciplines—typesetting of complex mathematics. Whilst it finds most of its users in those domains, TeX is also widely used for the production of non-mathematical content. In addition to typesetting mathematics, modern TeX engines (XeTeX and LuaTeX) provide exquisite handling of typeset text, support for OpenType font technologies, Unicode support, OpenType math fonts (a technology pioneered by Microsoft for Word), multilingual typesetting (including Arabic and other complex scripts) and direct output to PDF. LuaTeX, in particular, is incredibly powerful because it also has the Lua scripting language built into its typesetting engine, offering (for example) almost unlimited scope for the automated production/typesetting of highly complex or bespoke documentation, books and so forth. LuaTeX also provides you with the ability to write plugins to extend its capabilities. Those plugins are usually written in C/C++ to perform specialist tasks—for example: graphics processing, parsing XML, specialist text manipulation, on-the-fly database queries or, indeed, pretty much anything you might need to do as part of your document production processes. If you don’t want the complexities of writing plugins, chances are you can simply use the Lua scripting language to perform many of your more complex processing tasks.

Irrespective of the tools used by authors to write/prepare their work, the lingua franca of today’s digital publishing workflows—especially journals—is XML, which is generated from the collection of text and graphics files submitted by authors. Most publishers now outsource the generation of XML to offshore companies, usually based in countries such as India, China or the Philippines. Production staff usually do not have to worry (too much) about the messy details of conversion—provided the XML passes quality-control procedures and is a correct and faithful representation of the authors’ work. The future is, of course, online authorship platforms which remove the need for this expensive conversion of authors’ work into XML—but we’re still some way from that being standard practice: old habits die hard, so Microsoft Word and TeX will be around for some time, as will the need for conversion into XML.

And so to TeX: A brief history in time

My all-time favourite quote comes from the American historian Daniel J. Boorstin who once noted that:

“Trying to plan for the future without a sense of the past is like trying to plant cut flowers.”

In keeping with the ethos of that quote I’ll start with a very brief history of TeX.

On 30 March 1977 the diary of Professor Donald Knuth, a computer scientist at Stanford University, recorded the following note:

“Galley proofs for vol. 2 finally arrive, they look awful (typographically)… I decide I have to solve the problem myself”.

That small entry in Professor Knuth’s diary was the catalyst for a programming journey which lasted several years; the outcome of that epic project was a piece of typesetting software capable of producing exquisitely typeset mathematics and, of course, text: that program was called TeX. Along the way, Knuth and his colleagues designed new and sophisticated algorithms to solve some very complex typesetting problems, including automatic line breaking, hyphenation and, of course, mathematical typesetting. As part of the development, Knuth needed fonts to use with his typesetting software, so he also developed his own font technology, called METAFONT, although we won’t discuss that in any detail here.

To cut short a very long story, TeX proved to be a huge success—in no small part because Knuth took the decision to make TeX’s source code (i.e., program code) freely available, meaning that it could be built/ported, for free, to work on a wide range of computer systems. TeX enabled mathematicians, physicists, computer scientists and authors from many other technical disciplines to have exquisite control over typesetting their own work, producing beautifully typeset material containing highly complex mathematical content. Authors could use TeX to write and prepare their books and papers, and submit their “TeX code” to publishers—usually assured of a greater degree of certainty that their final proofs would not suffer the same fate as Knuth’s.

TeX: Knuth maintains his version, but others have evolved

Even today, nearly 4 decades after that fateful genesis of TeX, Professor Knuth continues to make periodic bug fixes to the master source code of his version of TeX—which is available from sources such as CTAN (the Comprehensive TeX Archive Network). Those updates take place every few years, the latest being “The TeX tuneup of 2014”, as reported in the journal TUGboat 35:1, 2014. During those “tuneups” Knuth does not add any new features to TeX; they really are just bug fixes. In the 1980s Knuth decided that, in the interest of achieving long-term stability, he would freeze the development of TeX; i.e., no new features would be added to his version of TeX. I specifically mention “his version of TeX” because Knuth did not exclude or prevent others from using his code to create “new versions of TeX” which have additional features and functionality. Those “new versions” are usually given names to indicate that, whilst they are based on Knuth’s original, they have additional functionality—hence the addition of prefixes to give program names such as pdfTeX, XeTeX and LuaTeX.

Huh—what about LaTeX? At this point you might be wondering why I have not mentioned LaTeX—a good question. Jumping ahead slightly: the reason I am not mentioning LaTeX (at this point) is that LaTeX is not a version of the executable TeX typesetting program—it is a collection of TeX macros, a topic which I will discuss in more detail below.

At this point, I’ll just use the term “TeX” (in quotes) to refer to Knuth’s original version and all its later descendants (pdfTeX, XeTeX, LuaTeX).

So, what does “TeX” actually do?

As noted, “TeX” is a typesetting program—but if you have formed a mental image of a graphical user interface (GUI), such as Adobe InDesign, then think again. At the time of TeX’s genesis, in the late 1970s, today’s sophisticated graphical interfaces and operating systems were still some way into the future, and TeX’s modus operandi still reflects that heritage—even in the new modern variants of TeX. Suppose someone gives you a copy of a “TeX” executable program and you want to use it to do something—how do you do that? “TeX” uses a so-called command-line interface: it has no fancy graphical screen into which you type your text to be typeset or point, click, tap to set options or configurations. If you run the “TeX” program you see a simple screen with a blinking cursor. Just by way of example, here’s the screen I see when I run LuaTeX (luatex.exe on Windows):


Clearly, if you want a piece of software to typeset something, you will need to provide some form of input (material to typeset) in order to get some form of output (your typeset material). Your input to the typesetting program will not only need to contain the material to be typeset but will also require some instructions to tell a typesetting program which fonts to use, the page size and a myriad of other details controlling the appearance of the typeset results. To typeset anything with “TeX” you provide it with an input text file containing your to-be-typeset material interspersed with “typesetting instructions” telling “TeX” how to format/typeset the material you have provided: i.e., what you want it to achieve. And here is where “TeX” achieves its legendary power and flexibility. The “typesetting instructions” that control “TeX’s” typesetting process are written using a very powerful programming language—one that Professor Knuth designed specifically to provide users with enormous flexibility and detailed control of “TeX’s” typesetting capabilities. So we can now start to see that “TeX” is, in fact, a piece of typesetting software that users can direct and control by providing it with instructions written in a programming language. You should think of “TeX” as an executable program (“typesetting engine”) which understands the TeX typesetting language.

A tiny example

Just to make it clear, here is a tiny example of some input to “TeX”—please do not worry about the meaning of the strange-looking markup (“TeX” commands that start with a “\”). The purpose here is simply to show you what input to “TeX” looks like:

$$\left| 4 x^3 + \left( x + {42 \over 1+x^4} \right) \right|.$$

And here is the output (as displayed in this WordPress blog using the MathJax-LaTeX plugin):

\[\left| 4 x^3 + \left( x + {42 \over 1+x^4} \right) \right|.\]

So, in order to produce your magnum opus you would write and prepare a text file containing your material interspersed with “TeX” commands and save that to a file called, say, myopus.tex and then tell your “TeX” engine to process that file. If all goes well, and there are no bugs in your “TeX” code (i.e., “TeX” programming instructions) then you should get an output myopus.pdf containing a beautifully typeset version of your work. I have, of course, omitted quite some detail here because, as I said at the start, this is not an article about running/using “TeX”.
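To make that workflow concrete, here is a complete (if trivial) plain TeX input file, reusing the formula from the tiny example above—the file name and content are, of course, just for illustration:

```tex
% myopus.tex -- a minimal plain TeX document
Here is a small piece of typeset mathematics:
$$\left| 4 x^3 + \left( x + {42 \over 1+x^4} \right) \right|.$$
\bye
```

Running pdftex myopus.tex on this file produces myopus.pdf directly; Knuth’s original engine (tex myopus.tex) would instead produce myopus.dvi, a DVI file that is converted to PostScript or PDF in a separate step.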

“TeX” the program (typesetting “engine”) and “TeX” the typesetting language

So, the word “TeX” refers both to an executable program (the “TeX” typesetting engine) and the set of typesetting instructions that the engine can process: instructions written in the “TeX” language. Understanding that the executable “TeX” engine is programmable is central to truly appreciating the differences between LaTeX, pdfTeX, pdfLaTeX, XeTeX, LuaTeX and so forth.

Each “TeX” engine (program) understands hundreds of so-called primitive commands. Primitive in this sense does not mean “simple” or “unsophisticated”, it means that they are the fundamental building blocks of the TeX language. A simple, though not wholly accurate, analogy is the alphabet of a particular language: the individual characters of the alphabet cannot be reduced to simpler entities; they are the fundamental building blocks from which words, sentences etc are constructed.

And finally: from TeX to pdfTeX, XeTeX and LuaTeX

Just to recap. When Knuth wrote the original version of “TeX” he defined it to have the features and capabilities that he thought were sufficient to meet the needs of sophisticated text and mathematical typesetting based, of course, on the technology environment of that time—including processing and memory of available computers, font technologies and output devices. Knuth’s specification of “TeX” included its internal/programming design (“TeX’s” typesetting algorithms) and, of course, defining the “TeX” language that people can use to “mark up” the material to be typeset. What I mean by “defining the TeX language” is defining the set of several hundred primitive commands that the “TeX” engine can understand, and the action taken by the “TeX” engine whenever it encounters one of those primitives during the processing of your input text.

Naturally, technology environments evolve: computers become faster and gain more storage/memory, new font technologies are released (Type 1, TrueType, OpenType), file output formats evolve (e.g., the move from PostScript to PDF) and Unicode has become the dominant way to encode text. Inevitably, “TeX” users wanted those new technologies to be supported by “TeX”—in addition to incorporating ideas for, and improvements to, the existing features and capabilities of Knuth’s original TeX program. As noted earlier, in the 1980s Knuth decided to freeze his development of TeX: no more new features in his version—bug fixes only. With the genuine need to update/modernize Knuth’s original software, TeX programming experts have taken Knuth’s original source code and enhanced it to add new features and provide support for modern typesetting technologies. The four-decade history of TeX’s evolution is quite complex but if you really want the full story then read this article by Frank Mittelbach: TUGboat, Volume 34 (2013), No. 1.

These new versions of TeX not only provide additional features (e.g., outputting direct to PDF, supporting OpenType fonts) they also extend and adapt the TeX language too: by adding new primitives to Knuth’s original set, thus providing users with greater programming power and flexibility to control the actions of the typesetting engine. Each new TeX engine is given its own name to distinguish it from Knuth’s original software: hence you now have pdfTeX, XeTeX and LuaTeX. These three TeX engines are not 100% compatible with each other and it is quite easy to prepare input that can be processed with one TeX engine but fail to work with others—simply because a particular TeX engine may support primitive commands that the others do not. But all is not lost: enter the world of TeX macros!

Primitives are not the whole story: macros and LaTeX

I have mentioned that each TeX engine supports a particular set of low-level commands called primitives—but this is not the full story. Of course, many of the same primitives are supported by all engines, but some are specific to a particular engine. “TeX” achieves its true power and sophistication through so-called TeX macros: the primitive commands of an engine’s TeX language can be combined to define new commands, or macros, built from low-level primitive instructions—and/or other macros. TeX macros allow you to define new commands capable of performing complex typesetting operations, saving a great deal of time and typing and avoiding many programming errors. In addition, TeX engines provide primitives that you can use to detect which TeX engine is typesetting a document—so that a document or macro package can, on the fly, adapt its behaviour to whichever engine is processing it, depending on whether or not that engine supports a particular primitive. If a particular primitive is not supported directly but can be “mimicked” (using combinations of other primitives) then all is usually well—but if the chosen TeX engine really cannot cope with a particular primitive then typesetting will fail and an error will be reported.
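As a small illustration of engine detection, the fragment below tests for primitives that only one engine provides. Note that it relies on \ifdefined, itself an e-TeX extension available in pdfTeX, XeTeX and LuaTeX but not in Knuth’s original TeX:

```tex
% Which engine is processing this file?
% \directlua exists only in LuaTeX, \XeTeXrevision only in XeTeX,
% and \pdftexversion in pdfTeX (and its descendants).
\ifdefined\directlua
  \message{This engine is LuaTeX.}%
\else
  \ifdefined\XeTeXrevision
    \message{This engine is XeTeX.}%
  \else
    \ifdefined\pdftexversion
      \message{This engine is (probably) pdfTeX.}%
    \else
      \message{Some other TeX engine.}%
    \fi
  \fi
\fi
```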

The TeX language is, after all, a programming language—albeit one designed to solve typesetting problems; but as a programming language TeX is extremely arcane and works very differently to most programming languages you are likely to encounter today.

So, finally, what is LaTeX?

We’ve talked about various versions of the TeX engine—from Knuth’s original TeX to its descendants pdfTeX, XeTeX and LuaTeX—and briefly discussed TeX as a typesetting language: primitives, programming and the ability to write macros. Finally, we are in a position to discuss LaTeX. The logical extension to writing individual TeX macros for some specific task you want to solve, as an individual, is to prepare a collection of macros that others can also use—a package of macros that collectively provide useful tools and commands that others can benefit from. And that is precisely what LaTeX is: a very large collection of complex and sophisticated macros designed to help you typeset books, journal papers and so forth. It provides a wealth of features to control things like page layout, fonts and a myriad of other typesetting details. Not only that, but LaTeX was designed to be extensible: you can plug in additional, more specialist, macro packages written to solve specific typesetting problems—e.g., producing nicely typeset tables or typesetting particularly complex forms of mathematics. If you visit the Comprehensive TeX Archive Network you can choose from hundreds, if not thousands, of macro packages that have been written and contributed by users worldwide.

So, if someone says they are typesetting their work with LaTeX then they are only telling you part of the story. What they really mean is that they are using the LaTeX macro package with a particular TeX engine—usually pdfTeX, but maybe XeTeX (for multilingual work) or LuaTeX (perhaps for advanced customized document production). Sometimes you will see terms such as pdfLaTeX, XeLaTeX or even LuaLaTeX: these are not actually the names of TeX engines; all they signify is which TeX engine is being used to run LaTeX. For example, if someone says “I am using pdfLaTeX”, what that really means is “I am preparing my typeset documents using the LaTeX macro package and processing them with the pdfTeX engine”. Equally, if anyone says to you that they are “using TeX” then, I hope, you now see that the statement does not actually tell you the whole story.

Filed under: TeX (general)

A note on a “gotcha” when building TeX Live from source (on Windows) [updated]

Posted by Graham Douglas

Post-publication update: GNU gawk

Since publication of the article below, further investigation with a member of the TeX Live team has identified the exact cause of the problem: an outdated version of GNU’s gawk command-line tool (used during compilation). I had been using version 3.1.7 of gawk (supplied with the MSYS distribution I was using) but after updating it to version 4.0.2 the line-ending problem no longer arises. If you are using MSYS on Windows, and want to compile TeX Live..., check the version of gawk installed on your machine. As I say, you live and (re)learn...

Original article

Just a short note to share the solution to a problem I experienced when trying to compile TeX Live from the C/C++ source distribution... on Windows. I have a bit of relevant experience because I regularly compile LuaTeX from source and have built other TeX engines–including Knuthian TeX from the raw WEB code and some versions of XeTeX.

So, with that experience, I decided to have a go at building TeX Live from the source file distribution–it's useful to be able to build and use the latest versions of TeX-related software. Using SVN (via the Tortoise SVN client) I checked out the TeX Live source directory and tried to build it using MinGW64/MSYS64. I read through the notes in README.2building (supplied with the TeX Live source) and followed the example to build dvipdfm-x. Running the Build/configure scripts (using the --disable-all-pkgs option) worked fine but, sadly, compilation failed with a cascade of errors... so I wanted to find out why.

Unquestionably, TeX Live is a truly impressive work of considerable complexity and, of course, it should build OK on Windows–so I figured that the problem had to be a relatively minor one to do with my setup. However, tracking it down initially felt like "looking for a needle in a haystack", to quote a well-known English figure of speech. Well, after a couple of days I found the problem... line endings in some key text files! When I checked out the source via SVN, some key template files (*.in files) had been saved with Windows line endings (CR+LF) rather than Linux endings of LF only. Running the top-level TeX Live Build/configure scripts generates a config.status shell script for each component/sub-system that has to be compiled. As the config.status scripts execute, they create a number of temporary files which are processed and deleted on-the-fly. To stop these temporary files being deleted (to assist my bug hunt) I used a simple trick: adding the line alias rm='echo' at the start of one of the config.status shell scripts (which are generated by configure).

I discovered that the config.status scripts generate a temporary file called defines.awk–a script designed to be executed by the AWK program. The purpose of defines.awk is to process the "template" configuration files (the *.in files) to generate the various config.h files that contain important settings (#defines) detected during the configuration process (i.e., during the execution of configure). These config.h files vary for each program you are building and are essential for successful compilation. Well, it turned out that the defines.awk script was failing to correctly parse the *.in files simply because the Windows line endings were causing a vital regular expression (in defines.awk) to fail. The result was config.h files that were just verbatim copies of their *.in templates, because none of the text replacements had worked due to the failure of the AWK regular expression. Not surprisingly, the erroneous config.h files caused the spectacular compilation failure I experienced on my first attempt. Re-saving the affected *.in files with Linux line endings seems to have solved the problems.
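The failure mode is easy to reproduce outside AWK too: a stray carriage return sits between the last “real” character of the line and the end-of-line anchor. Here is a quick Python illustration—the pattern below is made up for the demo; it is not the actual expression from defines.awk:

```python
import re

# A $-anchored pattern of the kind used to recognize template lines
# such as "#undef HAVE_STDINT_H" (pattern is illustrative only).
pattern = re.compile(r"^#undef \w+$")

unix_line = "#undef HAVE_STDINT_H"        # LF ending already stripped
windows_line = "#undef HAVE_STDINT_H\r"   # stray CR left over from CR+LF

print(bool(pattern.match(unix_line)))     # prints True
print(bool(pattern.match(windows_line)))  # prints False: CR defeats the $ anchor
```

The same mismatch, multiplied across every line of every *.in template, is why none of the text substitutions fired.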

And yes, so far all the TeX-related programs I have tried to build have compiled successfully. This is not the first time I have been "bitten" by problems caused by Linux/Windows line endings... so I guess you always live and (re)learn.


TeX’s DVI file preamble: deriving the values of num = 25400000 and den = 473628672

Posted by Graham Douglas


If you are at all interested in the innards of TeX's DVI files you might find the following article of some help – a quick post, in the form of a PDF, deriving the values of num = 25400000 and den = 473628672.
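In outline (the PDF works through the details): the DVI preamble declares that multiplying the file's dimensions by num/den yields lengths in units of 10⁻⁷ m; TeX's internal unit is the scaled point, with 2¹⁶ sp = 1 pt and 7227 pt = 254 cm exactly, so

```latex
\[
1\,\mathrm{sp}
= \frac{254\,\mathrm{cm}}{7227 \times 2^{16}}
= \frac{254 \times 10^{5}}{7227 \times 2^{16}} \times 10^{-7}\,\mathrm{m}
\quad\Longrightarrow\quad
\mathit{num} = 254 \times 10^{5} = 25400000,
\qquad
\mathit{den} = 7227 \times 2^{16} = 473628672.
\]
```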

Download PDF


Lua-scriptable PATGEN – i.e., PATGEN 2.4 with a Lua binding…

Posted by Graham Douglas

PATGEN: from WEB to C

I recently became curious about TeX's hyphenation patterns and started to read about how they are created – usually using PATGEN though, from what I've read, some brave souls do actually hand-craft hyphenation patterns! I decided to build PATGEN 2.4 from source code which, of course, means converting the PATGEN WEB source to C code via Web2C. Some time ago I went through the process of building my own Web2C executable for Windows (see this article for more details). I won't go into the specifics of doing the conversion, but I was able to create patgen.c – the resulting C code is less than 2,000 lines long. I also spent some time re-formatting the C code, simply because the machine-generated C produced by the Web2C process does not aim for beauty, just functionality. I removed all dependencies on Kpathsea and generally tidied the code to create clean, stripped-down code that is easy to compile.

Understanding PATGEN: not so easy

PATGEN is, of course, a very highly specialized program, designed for expert users who really need it. As a non-expert looking to understand just the basics, I found there was very little step-by-step "beginners" material – although some searching turned up useful "snippets", and the tutorial "A small tutorial on the multilingual features of PatGen2" by Yannis Haralambous was very helpful. There are, of course, a number of articles, by luminaries and experts, on specific uses of the PATGEN program; for me, though, it was a case of piecing together the puzzle: reading the PATGEN documentation and source code, plus some parts of Frank Liang's thesis Word Hy-phen-a-tion by Com-put-er, which describes the hyphenation algorithms that PATGEN implements.

Running PATGEN

To run PATGEN you need to provide it with the names/paths of (up to) four files (some can be "nul" if you are not using them):

PATGEN dictionary_file starting_patterns translate_file output_patterns

TIP: I created a PDF file of PATGEN's documentation that you can download here. Some information on the files you provide to PATGEN is discussed in sections 1 to 6 in the first few pages of the documentation.

In very brief outline, the files you provide on the command line are

  • dictionary_file: A pre-prepared list of hyphenated words from which you want to generate hyphenation patterns for TeX to use.
  • starting_patterns: (can be "nul", i.e., it is not mandatory) Best to read the description(s) in the documentation (link above).
  • translate_file: (can be "nul", i.e., it is not mandatory) From the documentation: "The translate file may specify the values of \lefthyphenmin and \righthyphenmin as well as the external representation and collating sequence of the `letters' used by the language." It also specifies other information – see the documentation for further details (section 54).
  • output_patterns: the output from PATGEN – a file of hyphenation patterns for use with TeX.

PATGEN: questions, questions...

In order to work its magic, PATGEN makes multiple passes through the dictionary_file as it builds the list of hyphenation patterns. As it performs the processing, PATGEN stops to ask you for input: it needs your help at various stages. Now, I'm not going to go into the details of those questions, simply because I'm not sufficiently experienced with the program to be sure that I'd be giving sensible advice. Sorry :-(.

Answering questions via Lua

So, finally, to the main topic of this post. As noted, during processing PATGEN asks you to provide some information to guide the pattern-generation process: the hyphenation levels, pattern lengths, plus some heuristic data that help PATGEN choose patterns. Ultimately, the answers you give to PATGEN are integer values that you enter at the command line. However, it's a bit frustrating to keep answering PATGEN's questions, so I wondered if it would be possible to "automate" providing those answers and, in addition, create a Dynamic Link Library (DLL) that I could use with LuaTeX – perhaps something very basic to start with, like this:


local pgen=require("patgen")


In the above code, require("patgen") will load a DLL (patgen.dll) and return a table of functions that let you set various parameters for PATGEN and then run it to return the pattern list as a string that you can subsequently use with LuaTeX. Note, LuaTeX does NOT require INITEX mode to use hyphenation patterns.

Calling Lua code (functions) from patgen.dll

The above simple scenario does indeed work and it's quite easy to implement this. Firstly, within PATGEN's void mainbody(void) routine you can replace the code that stops to ask you questions – such as the request for the start/finish pattern lengths:

Fputs(output, "pat_start, pat_finish: ");
input2ints(&n1, &n2);

The above code uses a function input2ints (int *a, int *b) to request two integers:

void input2ints (int *a, int *b)
{
  int ch;

  while (scanf (SCAN2INT, a, b) != 2)
    {
      while ((ch = getchar ()) != EOF && ch != '\n')
        ;
      if (ch == EOF)
        return;
      fprintf (stderr, "Please enter two integers.\n");
    }

  while ((ch = getchar ()) != EOF && ch != '\n')
    ;
}

You can replace this with your own function, say get_pattern_start_finish(&n1, &n2), which can, for example, call a function in your Lua script to work out the values to return for n1 and n2 (the values for pat_start and pat_finish). Perhaps you might store those values in Lua as a table. At the time of writing I've not yet written that part: at the moment the Lua/C module just returns some hard-coded answers. The next step simply requires making a call via the Lua C API to a named function in your Lua script that works out the values you want to provide. This gives the most flexibility because the logic is all contained in your Lua code, which makes it very quick and easy to experiment with different settings to generate different patterns. The same technique can be used for the other parameters that PATGEN asks for.

Returning the generated pattern(s)

Within PATGEN, there is a function called zoutputpatterns(...) which generates the hyphenation patterns and writes them out to a file. I'm experimenting with "wrapping" this in another function that uses a C++ stringstream object to capture/save the pattern text rather than writing it to a file. Doing this simply required modifying zoutputpatterns(...) to pass in the stringstream object and output the patterns (character data) to the stringstream rather than to a physical file. Once finished, you can access the stringstream's stored data as a C-style string (containing the generated hyphenation patterns) which you can return to Lua, and thus to LuaTeX.

using namespace std;

void do_output_patterns (int i, int j)
{
  std::stringstream *ss = new std::stringstream ();
  zoutputpatterns (i, j, ss);
  // you can then pass the string of patterns back to Lua and thus LuaTeX
  std::cout << ss->str ().c_str () << endl;
  delete ss;
}

In conclusion

This is just a quick summary of a work-in-progress, but it looks like it will provide a nice way to experiment rapidly with PATGEN. It offers dynamic generation of hyphenation patterns and a method to fully script PATGEN's activities – and thus to understand very quickly the effect of the parameters PATGEN asks you to provide. If there is any interest I might (eventually) release it once I'm happy that it's good enough.