leah blogs

20apr2023 · A conservative extension of ISO 8601 to support fractional days

You probably have seen ISO 8601 timestamps with fractional seconds, such as this one:

% date --iso-8601=ns 
2023-04-20T18:45:11,094052607+02:00

However, many people don’t know ISO 8601 also allows for fractional minutes and hours!

According to the standard, these timestamps are equivalent (rounded to a second):

2023-04-20T18:45:11
2023-04-20T18:45,18333
2023-04-20T18,75305

Note that, in contrast to common scientific usage, the standard recommends separating the decimal part with a comma rather than a full stop, although the latter is permitted too.

However, the standard does not specify the obvious next generalization, that is, allowing fractional days. I thus propose to extend ISO 8601 in the following way, which does not change the meaning of valid existing representations:

The local time representation (after the optional time designator) may consist of only a decimal fraction, which then is interpreted as a multiple of 24 hours.

Thus, we can write the above timestamp also like this:

2023-04-20T,78137
2023-04-20,78137
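
To make the arithmetic concrete, here is a minimal C sketch (my own illustration, not part of the proposal) that prints the current local time in all four forms, truncated to five fractional digits and with the UTC offset left out:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = time(NULL);
    struct tm lt = *localtime(&t);
    char date[16];
    strftime(date, sizeof date, "%Y-%m-%d", &lt);

    /* seconds into the current minute, hour, and day */
    long long s = lt.tm_sec;
    long long m = lt.tm_min * 60LL + s;
    long long h = lt.tm_hour * 3600LL + m;

    printf("%sT%02d:%02d:%02d\n", date, lt.tm_hour, lt.tm_min, lt.tm_sec);
    printf("%sT%02d:%02d,%05lld\n", date, lt.tm_hour, lt.tm_min, s * 100000 / 60);
    printf("%sT%02d,%05lld\n", date, lt.tm_hour, m * 100000 / 3600);
    printf("%sT,%05lld\n", date, h * 100000 / 86400);  /* the proposed form */
    return 0;
}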

Now, why would one want this? Essentially, there are three reasons:

First, it’s cute and an obvious extension of the existing format.

Second, it allows representing times of the French Republican Calendar, which uses a decimal system as well, in a natural way: in this calendar, the day is divided into 10 hours of 100 minutes each, with 100 seconds per minute. Thus, the digits align directly with a decimal fraction of the whole day. The above timestamp is then (computed using fdate):

Primidi, 1 Floréal CCXXXI (231) 7:81:37

Note that we use local time here, not Paris time. If you insist on using Paris solar time, you need to offset 9 ISO minutes and 21 ISO seconds, which can be approximated as

2023-04-20T,77350+0009

Note that ISO 8601 does not allow for specifying offsets from UTC in seconds (another obvious oversight).

Finally, the mechanism also supports the use of Swatch Internet Time, a late-90s decimal time system. Here, the day is divided into 1000 beats, and the offset is fixed at UTC+1 (for the Swatch headquarters in Biel):

2023-04-20T,739+0100

This is a bit more verbose than @739 but at least it’s an international standard already!
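
For reference, a beat is just the fraction of the day at UTC+1, scaled to 1000 and truncated; here is a tiny C helper of my own, purely illustrative:

#include <stdio.h>

/* Illustrative only: Swatch .beats from a UTC clock time, i.e. the
   fraction of the day at UTC+1, times 1000, truncated. */
static int beats(int utc_hour, int utc_min, int utc_sec)
{
    long s = ((utc_hour + 1) % 24) * 3600L + utc_min * 60L + utc_sec;
    return (int)(s * 1000 / 86400);
}

int main(void)
{
    printf("@%d\n", beats(16, 45, 11));  /* the timestamp above: prints @739 */
    return 0;
}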

NP: Tristan Brusch feat. Annett Louisan—Kein Problem

24dec2022 · Merry Christmas!

Picture of Kropotkin with a Christmas hat

Frohe Weihnachten, ein schönes Fest, und einen guten Rutsch ins neue Jahr wünscht euch
Leah Neukirchen

Merry Christmas and a Happy New Year!

NP: First Aid Kit—Fallen Snow

11oct2022 · 50 blank pages, or: black-box debugging of PDF rendering in printers

I was happily typesetting a book with ConTeXt and its new engine LMTX when I decided to print a few pages to see if I got the sizes right. Since I despise printer daemons, I directly print stuff over the network using a little script.

To my surprise, I just got a blank page out.

As the title of this post suggests, it won’t be the only blank page. This is the story of me debugging PDF generation in LMTX.

Giving it a closer look, the page wasn’t entirely blank. I could see the cutmarks I added to indicate the page size. During creation, I used MuPDF as a previewer—it’s lightweight and stays out of my way. But apparently the PDF was broken, so I tried a few other previewers. Evince and Firefox pdf.js rendered it fine. I looked at Okular and xpdf, and it came out nicely as well. At a later point, I even installed ancient Acrobat 7(!) and it would display as intended.

I tried the other printer in our university office. Another blank page.

Two different vendors, yet they both fail to print a simple PDF?

I secretly hoped some previewer would also render a blank page. Then I could just compile its source code on my machine and throw all kinds of debugging tools at it… but they all worked.

I tried converting the PDF to PDF with Ghostscript, and then it printed fine. So the PDF couldn’t be too wrong. But I wanted to fix it directly.

So how do you debug a PDF that gets printed wrongly but seems to be fine otherwise?

My first intuition was to make a PDF that works, and then look at the differences. So I created a simple document and ran it through the previous ConTeXt version, called MKIV. This version uses LuaTeX as an engine. It printed fine. (To nobody’s surprise—I would have discovered this years ago otherwise.)

I put both PDFs through various PDF validators, but they all said both were good.

Time to dig deeper. I disabled PDF compression and looked at both PDF files in a text editor. Sure, there were a lot of little differences. But fundamentally? Pretty much the same.

… I looked at the first printout again. Not only were the page marks printed, but the tiny page numbers inside them were too! I checked the PDF and saw that it uses two fonts (using pdffonts):

NSOLKP+TeXGyreSchola-Regular         CID Type 0C       Identity-H       yes yes yes      4  0
FFMARX+DejaVuSansMono                CID TrueType      Identity-H       yes yes yes      5  0

The page numbers use the DejaVu Sans font, which is supplied in TrueType format. I changed the main font of my test document to DejaVu Sans, and voilà: it printed fine. I was very happy about this, as it meant LMTX can generate printable PDF files in principle. But for its default font (Latin Modern) and the font I wanted to print in (TeX Gyre Schola), there apparently was an issue.

I knew the basics of PDF from decades ago when I wrote a PDF generator from scratch. (I never got around doing more than putting a few characters on a page, though.) Now it was time to learn about the PDF font formats.

Both the MKIV and the LMTX engine use the “CID Type 0C” font format these days, which embeds only the actually used glyphs from an OpenType font into the PDF. I pulled out the CID fonts from the PDF (using mutool extract). While file didn’t recognize the file format, luckily FontForge could open it fine. (As I learned later, FontForge can open the PDF directly and import its fonts.)

I noticed a first difference: while MKIV (and thus LuaTeX) spread out the glyphs over the positions, LMTX nicely arranged the used glyphs starting from code point 1. I had already contacted Hans Hagen, the main developer behind ConTeXt, and we wondered whether starting the glyphs from 31 would help… again it rendered nicely on all previewers, but still printed blank pages.

I had the strong suspicion that the font embedding was the problem. To verify this hypothesis, I manually fiddled the LMTX font into the MKIV document (this was easy because it was smaller, so I just had to add some padding to make the document valid again), adjusted some code points in the PDF, and it would render glyphs on the screen. But it would not print. So now I was fairly sure that the font stream was the culprit, and not some other part of the PDF.

After more research, I found a tool to dump a CID font in a readable format: CFFDump. This small Java program turned out to be essential for tracking down the bug.

It generates a dump that looks like this:

% CFF Dump Output
% File: font-0008.cid


--------------------------------------------------------------------------------

Header (0x00000000):
    major: 1
    minor: 0
    hdrSize: 4
    offSize: 4

--------------------------------------------------------------------------------

Name INDEX (0x00000004):
  count: 1, offSize: 1
    [0]: (CLLXEY+LMRoman12-Regular)

--------------------------------------------------------------------------------

Top DICT INDEX (0x00000021):
  count: 1, offSize: 1
  [0] (0x00000026):
  <<
    /ROS << /Registry (Adobe) /Ordering (Identity) /Supplement 0 >>
    /CIDCount 15
    /FamilyName (LMRoman12)  % SID 392
    /FullName (LMRoman12-Regular)  % SID 393
    /Weight (Normal)  % SID 394
    /FontBBox [-422 -280 1394 1127]
    /isFixedPitch false
    /ItalicAngle 0
    /UnderlinePosition -175
    /UnderlineThickness 44
    /CharstringType 2
    /FontMatrix [0.001 0 0 0.001 0 0]
    /StrokeWidth 0
    /CharStrings 257  % offset
    /charset 220  % offset
    /FDArray 1751  % offset
    /FDSelect 249  % offset
    /Private [23 1728]  % [size offset]
    % ----- Following entries are missing, so they get default values: -----
    /PaintType 0  % default
    /CIDFontVersion 0  % default
    /CIDFontRevision 0  % default
    /CIDFontType 0  % default
  >>

And it goes on and on, detailing all the things specified in the font.

Inevitably, I had to dig into the internals of CFF fonts, that is, Adobe’s Technical Note #5176.

I carefully compared the dump of the working MKIV font with the broken LMTX font… and didn’t find substantial differences. Sure, one copied a few more metadata fields, and the other had more font fields set, but mostly to values that were the default anyway. Nothing that seemed to be related to our bug. And also, the various PDF viewers rendered the document fine, so there couldn’t have been a major mistake there.

By now I had learned about the design of LMTX, and luckily I saw that all parts of this font embedding were written in quite straightforward Lua code that I could easily modify, so experiments were easy. Unfortunately, I didn’t have a printer at home so I had to annoy some of my friends to do test prints for me. They printed a lot of blank pages…

But I just couldn’t track down the problem. A reasonable person would have given up ages ago and just fed the PDF through Ghostscript before printing, but I wanted to get to the bottom of the thing; and I also wanted this new TeX engine to produce working documents out of the box.

In my time as a software developer, one thing I learned about debugging is that if a thing takes a long time to debug, it can be for two reasons: either the cause is much more simple than you thought, or it’s much more complicated.

I chose violence. I corrupted the CID font in various ways… the printer would stop working and print an error message instead. Some printers have an internal error log, but before these experiments it was empty.

Perhaps the document wasn’t wrong, but the printer software was? But by now we could reproduce the issue with a bunch of printers—how can they all have the same issue?

After some wrong attempts related to font hinting, I was out of ideas and decided to kill all fields one by one and check if it made any difference.

I deleted the /FontMatrix entry and… suddenly it printed nicely.

Now, the font matrix is a feature of CFF fonts to encode their scaling and shearing factors. It’s a 2x3 matrix that encodes an affine transformation (perhaps you know this from SVG). The details don’t matter, but in practice you only have two values set and they determine the font size relative to the sizes used in the font drawing instructions. By default, the font matrix is [0.001 0 0 0.001 0 0], meaning that moving by 1000 units will move by 1 PostScript point on paper.

I was happy, but I also was very confused: of all things, why exactly did that fix it? I noticed earlier that the MKIV document didn’t have the font matrix set, but I also looked at the Ghostscript output and there it worked fine. Even more so, LMTX set the font matrix to its default value! It shouldn’t make a difference at all!

Having gone this far, I wasn’t satisfied without a real answer. I wondered if LMTX encoded the font matrix the wrong way, but after digging into the spec for that (Technical Note #5177) and double checking, it seemed fine. The working Ghostscript PDF used exactly the same byte sequence to encode the font matrix.

Staring some more at CFFDump output, I finally noticed what Ghostscript did differently: the CFF had two font matrices defined! CFF allows defining a font matrix in the “Top DICT INDEX” as well as the “Font DICT INDEX”.

And while the “Top DICT INDEX” was the same that we used, [0.001 0 0 0.001 0 0], the one in the “Font DICT INDEX” was [1 0 0 1 0 0], i.e. the identity matrix. I added this matrix to LMTX output, and finally the PDF printed properly.

Still, this was a surprise. Why would explicitly setting the font matrix to its default value change the behavior? It turns out the reason is an interaction between these two default values, which unfortunately does not seem to be specified by Adobe. I found a similar bug in Ghostscript that explains the reasonable thing to do:

1) If both Top DICT and Font DICT does _not_ have FontMatrix, then Top DICT = [0.001 0 0 0.001 0 0], Font DICT 
= [1 0 0 1 0 0].  (Or, Top DICT = (absent), Font DICT = [0.001 0 0 0.001 0 0] then let '/CIDFont defineresource' 
make Top DICT = [0.001 0 0 0.001 0 0], Font DICT = [1 0 0 1 0 0].)

2) If Top DICT has FontMatrix and Font DICT doesn't, then Top DICT = (supplied matrix), Font DICT = [1 0 0 1 0 0].

3) If Top DICT does not have FontMatrix but Font DICT does, then Top DICT = [1 0 0 1 0 0], Font DICT = 
(supplied matrix).  (Or, Top DICT = (absent), Font DICT = (supplied matrix) then let '/CIDFont defineresource' 
make Top DICT = [0.001 0 0 0.001 0 0], Font DICT = (supplied matrix 1000 times larger). I think this is better.)

4) If both Top DICT and Font DICT _does_ have FontMatrix, then Top DICT = (supplied matrix), Font DICT = 
(supplied matrix).

All previewers seem to have adopted this algorithm. But certain older printers botched step 2: they end up with two font matrices [0.001 0 0 0.001 0 0] that are multiplied together, which ends up printing your document at a thousandth of its size; i.e., you get a blank page. But note that it’s a perfectly valid PDF!
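
To make the scale factor concrete, here is a toy C sketch (my own, not printer code) that composes two PDF-style matrices [a b c d e f]; two default CFF matrices compose to a scale of 0.000001:

#include <stdio.h>

/* Compose two affine matrices given as [a b c d e f], as PDF/CFF do. */
static void concat(const double m[6], const double n[6], double r[6])
{
    r[0] = m[0]*n[0] + m[1]*n[2];
    r[1] = m[0]*n[1] + m[1]*n[3];
    r[2] = m[2]*n[0] + m[3]*n[2];
    r[3] = m[2]*n[1] + m[3]*n[3];
    r[4] = m[4]*n[0] + m[5]*n[2] + n[4];
    r[5] = m[4]*n[1] + m[5]*n[3] + n[5];
}

int main(void)
{
    double def[6] = {0.001, 0, 0, 0.001, 0, 0}, r[6];
    concat(def, def, r);
    /* prints [1e-06 0 0 1e-06 0 0]: glyphs come out 1000 times smaller
       than with a single default matrix, hence the "blank" page */
    printf("[%g %g %g %g %g %g]\n", r[0], r[1], r[2], r[3], r[4], r[5]);
    return 0;
}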

We thus had two ways to fix the bug: write no font matrix at all, or write both of them. I was at first leaning towards the latter, doing it as Ghostscript does, but we found an issue with FontForge, which will then render the fonts internally at 1000x the size and thus consume a lot more memory. Since we did not find a need to use a non-default font matrix, we decided to go with the former: no font matrix at all. After all, it worked fine for LuaTeX all those years, too.

(Why did this issue not affect the TrueType font? It’s embedded in a different format that only has a single scaling factor and has no concept of a font matrix.)

A trial print of the PDF on many printers is ongoing and seems very promising so far, so this fix (essentially, the deletion of one line of code) will soon be shipped in a ConTeXt snapshot for general availability.

I would like to thank Hans Hagen for not giving up on helping me with this, and all my friends who test-printed pages for me and/or had to hear me talk about nothing else for a week or so.

NP: Rites of Spring—All Through A Life

26mar2022 · Note taking in Emacs with howm

Prelude and Motivation

After trying out and fiddling with a plethora of existing and self-written software to organize my notes, I have decided I need to stop experimenting and choose a solution that is sufficient but, most importantly, one that I will actually use and to which I have migrated all my existing notes. Note-taking systems are not an end unto themselves.

Roughly speaking, my essential requirements are:

  • Something that works with plain text, and ideally supports Markdown as that is the syntax I publish most things in and I am most familiar with.
  • Something that can be used from Emacs, because that’s where I do most of my text editing and writing.
  • Something that stores a note per file. This has just proved to be the most future-proof way.
  • Some basic means of connecting notes.

I found I don’t need these things:

  • Support for direct HTML publishing, as I realized that most of my notes are written for myself and I’ll put them up somewhere else for publishing. (This usually involves prior editing anyway.)
  • Having a fancy UI and graph displays. I consider these goodies I can easily go without.
  • Specialized productivity features like to-do lists, date scheduling or time tracking. These are out of scope for my use: I use a regular calendar for things with a deadline (which I’m blessed to have very few of) and will stick to simple to-do lists for my personal projects.

Many people would recommend me org-mode now, but I’ve never been a fan of its clunky syntax and I really don’t need most of the features. org-roam at first looked promising but linking between notes is quite complicated and the database dependency seems to be overkill for a personal note taking system on modern hardware.

I decided to settle on howm, an Emacs mode that is not very well-known in the Western world but has gained a certain following in Japan.

It’s a roughly 20-year-old Emacs mode that’s still being used and maintained by its original author Kazuyuki Hiraoka, so I have confidence it will be around for some more time. You can format notes however you like, so I can use Markdown as I prefer. Notes are one file per note by default (but see below). It actually has features for date scheduling and to-do lists, but I won’t go deeper into them for now. Its code is reasonably simple and well structured, so I was able to extend it in a few ways easily, as I’ll detail at the end of this post.

What really sold me was this quote I found on the mailing list:

I cannot show a good general guide because I’m lazy, loose, and bad at tidying. I’ve already given up well-ordered notes. Howm may be suitable for those who are tired from strict systems.
— Kazuyuki Hiraoka

Such an undogmatic system is just right for my purposes.

How howm works

Since there are very few resources in English that explain howm (apart from this 2006 article that got lost in time), I shall give a quick introduction for the interested reader:

howm is short for “Hitori Otegaru Wiki Modoki”, which roughly translates to “Single-user Easy Wiki Mode”.

The basic feature set of howm is very simple, and can be condensed into the mantra “write fragmentarily and read collectively”. Essentially, howm provides an Emacs minor mode for marking text to trigger certain searches. Since it is a minor mode, you can use whatever major mode you like to use for your writing (e.g., I currently use markdown-mode).

Howm notes are kept in a directory hierarchy that you can organize how you wish; by default a date-based system is used and filenames include the current timestamp at creation. This provides unique and sensible note identifiers, so I stick to it. You also can create explicitly named note files, but I haven’t found a use for them yet.

There are two essential kinds of markup howm cares about: note titles and links. By default, titles are marked up by putting a ‘=’ at the beginning of the line, but this can be configured (and this must be done before loading howm-mode(!)). The benefit of adding a title to a note is that you can have the summary buffer show titles instead of matching lines, which can be helpful to get a better overview of search results. (The creator of howm is sceptical of titling all notes; I think it very much depends on the average length of your notes. There is no requirement to use titles.)

There are two kinds of links supported by howm, namely goto and come-from (in a nod to INTERCAL). goto links are forward references and written like this:

>>> howm

Pressing return on this line when howm-mode is enabled will show a list of all occurrences of the word howm in your notes directory.

In contrast, a come-from link is written like this:

<<< howm

And this will cause the word howm in any howm-mode buffer to be underlined and trigger a search where the buffer with <<< howm will appear first.

Thus, compared to most contemporary hypertext systems, we not only have a means of explicitly linking to other notes, but also for creating implicit contextual links—which can help you find connections between your notes that you didn’t see yet…
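
To illustrate, here is a small, entirely made-up note combining the markup described above (a title line, a goto link, and a come-from link; the keywords emacs and howm are just examples):

= An example note about howm markup

Free-form text goes here; only the markers below are special.
The come-from line makes the word howm a link in every howm-mode buffer.

>>> emacs

<<< howm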

It is straightforward to implement something like #tags or WikiWords using these features, if you wish to do so.

Additionally, howm provides an inline link syntax [[...]] that works like >>> but can appear within a line. I make a suggestion below on how to turn it into a direct link to the first page with the given title; but for now I decided not to use this very much.

The line-based nature of the >>> syntax prevents usage for “inline” links. After giving it some thought, I consider it a strength for a note-taking system to make forward links related to a paragraph and not part of a sentence. Additionally, it also makes the plain text easier to read as the link target is not interleaved into the text. (Compare with the use of reference-style links in Markdown.)

An aside: howm actually supports multiple notes per file by having multiple title lines in a file. The search summary will always show the title directly before the matching line. You can use C-c , C to create a new note in the current buffer. I don’t use this much, but I think it could be useful for glossary files that contain many short notes. Some people also use this for keeping multiple daily notes in a single file.

Using howm

Howm provides two main features to access notes: the menu and the summary buffer. The howm menu (shown with C-c , ,) provides a very customizable view into your howm notes. By default it shows your schedule, recent notes, and random notes. You can access many howm features with a single keypress from the menu. Since I don’t use the scheduling feature, I mostly access howm from the summary buffer instead.

The howm summary buffer

The howm summary buffer shows the result of a search (C-c , g), a list of recent notes (C-c , l), or an overview of all notes (C-c , a). It is very convenient to see the matches and you get a preview of the note when you move the cursor to a search result. Typing RET will open the note for editing. Typing T will toggle between displaying matching lines or the titles of notes with matches.

In the summary buffer, you can also type @ and read all matching notes in a concatenated way, so you get the full context of all notes at once.

Setting up and customizing howm

Basic setup is reasonably well documented in English, but I’ll summarize it here. You can get howm from ELPA these days, so installing is very easy. You should set some variables to configure it according to your needs:

;; Directory configuration
(setq howm-home-directory "~/prj/howm/")
(setq howm-directory "~/prj/howm/")
(setq howm-keyword-file (expand-file-name ".howm-keys" howm-home-directory))
(setq howm-history-file (expand-file-name ".howm-history" howm-home-directory))
(setq howm-file-name-format "%Y/%m/%Y-%m-%d-%H%M%S.md")

Here, we decide that ~/prj/howm is the base directory for our howm notes, and we also put the two auxiliary files howm uses there. Additionally, we change the default name format to end with .md (which also turns on markdown-mode by default).

Next, we want to use ripgrep for searching howm. For my usage, plain GNU grep would be sufficient, but I want to use ripgrep in the next step too, so for consistency let’s use it for all searches:

;; Use ripgrep as grep
(setq howm-view-use-grep t)
(setq howm-view-grep-command "rg")
(setq howm-view-grep-option "-nH --no-heading --color never")
(setq howm-view-grep-extended-option nil)
(setq howm-view-grep-fixed-option "-F")
(setq howm-view-grep-expr-option nil)
(setq howm-view-grep-file-stdin-option nil)

The next addition is interactive search with ripgrep (C-c , r). This is the most useful feature I added to howm myself. I think it provides a great way to interact with your notes, as you get instant feedback from your search terms, and can stop searching as soon as you found what you were looking for. I used counsel-rg as an inspiration for this, and we turn the ripgrep matches into a regular howm summary buffer for further consumption.

;; counsel-rg for howm
(defun howm-list--counsel-rg (match)
  (if (string= match "")
      (howm-list-all)
    (if (or (null ivy--old-cands)
            (equal ivy--old-cands '("No matches found")))
        (message "No match")
      (let ((howm-view-use-grep
             #'(lambda (str file-list &optional fixed-p force-case-fold)
                 (mapcar
                  (lambda (cand)
                    (if (string-match "\\`\\(.*\\):\\([0-9]+\\):\\(.*\\)\\'" cand)
                        (let ((file (match-string-no-properties 1 cand))
                              (line (match-string-no-properties 2 cand))
                              (match-line (match-string-no-properties 3 cand)))
                          (list (expand-file-name file howm-directory)
                                (string-to-number line)
                                match-line))))
                  ivy--old-cands))))
        (howm-search ivy--old-re t)
        (riffle-set-place
         (1+ (cl-position match ivy--old-cands :test 'string=)))))))

(defun howm-counsel-rg ()
  "Interactively grep for a string in your howm notes using rg."
  (interactive)
  (let ((default-directory howm-directory)
        (counsel-ag-base-command counsel-rg-base-command)
        (counsel-ag-command (counsel--format-ag-command "--glob=!*~" "%s")))
    (ivy-read "Search all (rg): "
              #'counsel-ag-function
              :dynamic-collection t
              :keymap counsel-ag-map
              :action #'howm-list--counsel-rg
              :require-match t
              :caller 'counsel-rg)))

(define-key global-map (concat howm-prefix "r") 'howm-counsel-rg)

Next, I tweak some sorting settings. I want the “recent” view to list files by mtime (so that recently edited files appear on top), but the “all” view should be sorted by creation date.

;; Default recent to sorting by mtime
(advice-add 'howm-list-recent :after #'howm-view-sort-by-mtime)
;; Default all to sorting by creation, newest first
(advice-add 'howm-list-all :after #'(lambda () (howm-view-sort-by-date t)))

A great usability enhancement is buffer renaming: since howm file names are a bit unwieldy (like ~/prj/howm/2022/03/2022-03-25-162227.md) you can use these two lines to rename note buffers according to their title, which makes switching between multiple notes more convenient.

;; Rename buffers to their title
(add-hook 'howm-mode-hook 'howm-mode-set-buffer-name)
(add-hook 'after-save-hook 'howm-mode-set-buffer-name)

Another personal preference is enabling orgalist-mode, which I like for shuffling around Markdown lists.

(add-hook 'howm-mode-hook 'orgalist-mode)

Finally we fix an anti-feature in howm: by default, it binds C-h to the same binding as backspace, but this is only useful on legacy terminals (and even then Emacs does the translation). I wouldn’t really mind, but it breaks the Emacs help feature, so we unbind C-h for the modes:

(define-key howm-menu-mode-map "\C-h" nil)
(define-key riffle-summary-mode-map "\C-h" nil)
(define-key howm-view-contents-mode-map "\C-h" nil)

My configuration ends with three definitions of action-lock, the howm mechanism for marking text as active and doing something on RET. Two of them are related to the reference management software Zotero, which I use for organizing papers, and enable me to link to articles in my Zotero database by URL or BibTeX identifier:

;; zotero://
(add-to-list 'action-lock-default-rules
             (list "\\<zotero://\\S +" (lambda (&optional dummy)
                                         (browse-url (match-string-no-properties 0)))))
;; @bibtex
(add-to-list 'action-lock-default-rules
             (list "\\s-\\(@\\([a-zA-Z0-9:-]+\\)\\)\\>"
                   (lambda (&optional dummy)
                     (browse-url (concat "zotero://select/items/bbt:"
                                         (match-string-no-properties 2))))
                   1))

Finally, as mentioned above, this is how to make [[...]] wiki-links directly point to the first page with that title, skipping the summary buffer:

;; make wiki-links jump to single title hit if possible
(add-to-list 'action-lock-default-rules
             (list howm-wiki-regexp
                   (lambda (&optional dummy)
                     (let ((s (match-string-no-properties howm-wiki-regexp-pos)))
                       ;; letting create-p be nil here, howm-keyword-search-subr
                       ;; should check create-p after open-unique-p
                       (howm-keyword-search (concat "= " s) nil t)))
                   howm-wiki-regexp-hilit-pos))

The whole configuration is part of my .emacs file.

Future ideas

One thing I want to implement but didn’t yet get around to is support for searching notes using ugrep, which has a nifty boolean search mode that applies to whole files, so you can do searches that are not limited to a line context (e.g. hoge|fuga -piyo finds all notes that mention hoge or fuga, but don’t contain piyo).

I may also look into the scheduling features of howm, but I direct you to the terse README for now, if you’re curious.

Anyway, I hope this was interesting and perhaps encourages you to look into howm, an Emacs mode that I feel doesn’t receive the attention it deserves.

NP: Bob Dylan—What Was It You Wanted

08jan2022 · How to check you're in the initial pid namespace?

It all started with a simple question: how can a Linux process determine whether it is the init process of a freshly booted system?

A dozen years ago, the Unix textbook answer to this would have been: well, if its process id (pid) is 1, then it is init by definition.

These days, things are not that simple anymore. Containerization creates situations where pid is 1, but the process runs, well, in a container. In Linux, this is realized by using a feature called “pid namespaces”. The clone(2) syscall can take the flag CLONE_NEWPID (“since Linux 2.6.24”), which puts the new process into a new pid namespace. This means that this process will have pid 1 inside the pid namespace, but outside (i.e. in the parent pid namespace), the process has a regular pid. Various Linux APIs transparently translate pids between these namespaces.

The pid namespaces form a hierarchy, and the one at the very top is called “initial pid namespace”.

You can use the tool unshare(1) to play with pid namespaces:

% unshare --fork --map-root-user --pid bash -c 'echo $$' 
1

This is a way to spawn (as a regular user!) a process that has pid 1, at least, that’s what it looks like to the process.

We can try to find some evidence that we’re a freshly booted init, but none of it is really conclusive:

  • Our user id is 0, we are root (necessary but not sufficient of course).
  • $TERM should be linux; trivial to override.
  • $BOOT_IMAGE is set, but this depends on the boot loader.
  • System uptime is “low”, but it takes the initrd boot time into account. Our non-root init could be spawned in a container at boot time.

There are also some indicators the process runs in a container using one of the popular solutions such as docker or podman:

  • The process has a lot of supplementary groups already.
  • If we were put inside a cgroup, reading /proc/1/cgroup will indicate it.
  • The file /.dockerenv exists.

But there are still situations, such as the unshare call above, where all of these things may not be true.

Therefore I tried to find the ultimate way to detect whether we are in the initial pid namespace.

I started to research this and quickly found the ioctl(2) NS_GET_PARENT which seemed to be useful: “Returns a file descriptor that refers to the parent namespace of the namespace referred to by fd.” However, it is useless for this purpose:

EPERM  The requested namespace is outside of the caller's
       namespace scope.  This error can occur if, for example,
       the owning user namespace is an ancestor of the caller's
       current user namespace.  It can also occur on attempts to
       obtain the parent of the initial user or PID namespace.

Of course, it makes a lot of sense that we cannot get a handle to the surrounding pid namespace, as this would make the encapsulation provided by namespaces futile. However, coalescing these two error conditions (namespace is outside the caller namespace, and namespace is initial pid namespace) doesn’t make our life easier.

So, we need to bring in bigger guns. I searched the kernel source for occurrences of init_pid_ns, as this namespace is called in the Linux source code. There are not too many occurrences we can rely on. The taskstats module limits the TASKSTATS_CMD_ATTR_REGISTER_CPUMASK command to the initial pid namespace only, but using this requires speaking the netlink interface, which is terrible. Also, the behavior could change in future versions.

One interesting, and viable, approach is this limitation of the reboot(2) syscall: only some LINUX_REBOOT_CMD_* commands are allowed to be sent inside a nested pid namespace. Now, we need to find a “harmless” command to call reboot(2) with to test this! (Obviously, only being able to suspend the machine from the initial pid namespace is not a very useful check…) There are two commands that do not do much harm: LINUX_REBOOT_CMD_CAD_{ON,OFF} will toggle the action that Ctrl-Alt-Delete performs. Unfortunately, it is impossible to read the state of this flag, making this test a destructive operation still. (But if you are pid 1, you may want to set it anyway, so you get pid namespace detection for free.)

So I kept looking for other ways until I realized there’s a quite natural property to check for, and that is to find out if there are kernel threads in the pid namespace. Kernel threads are spawned by the kernel in the initial pid namespace and help perform certain asynchronous actions the kernel has to do, subject to process scheduling. As far as I know, kernel threads never occur in a nested pid namespace, and at least the parent process of kernel threads, kthreadd, will always exist. Conveniently, it also always has pid 2.

Thus, we just need to figure out if pid 2 is a kernel thread! Note that just checking whether pid 2 exists is cheap, but racy: the container runtime could have spawned another process before we are scheduled to do the check, and that process would then get pid 2.

Luckily, kernel threads have quite a few special properties, which are of varying difficulty to check from a C program:

  • /proc/PID/cmdline is empty (not a good indicator, user space processes can clear it too).
  • kernel threads have parent pid 0 (requires parsing /proc/PID/stat, which everyone gets wrong the first time, or /proc/PID/status).
  • kernel threads have no Vm* data in /proc/PID/status.
  • kernel threads have the flag PF_KTHREAD set (requires parsing /proc/PID/stat again).
  • kernel threads have an empty symlink for /proc/PID/exe.

I decided to go with the last one. On Linux, empty symlinks are impossible to create as a user, so we just need to check that and we’re done, right?

On a regular file system, using lstat(2) would have filled st_size with the length of the symlink. But on a procfs, lstat is not to be trusted, and even non-empty symlinks have st_size equal to 0. We thus really need to use the readlink(2) syscall to read the link. After doing this, you will notice that it returns ENOENT… exactly the same as if pid 2 did not exist!

We therefore need another check to verify that pid 2 does exist. Luckily, here an lstat on the /proc/2/exe file is fine: it must return zero.

Note that you need to do these operations in exactly this order, else you are subject to race conditions again: the only reason this works is that if pid 2 is kthreadd, it will not have terminated before the lstat check (because it cannot terminate).

[Addendum 2023-09-17: vmann points out that this is still racy: a container can spawn a new pid 2 between the lstat and the readlink call. Please use one of the more complicated approaches mentioned above!]

Therefore, readlink(2) failing with ENOENT and lstat(2) succeeding is exactly the combination required to check pid 2 is kthreadd, which implies there are kernel threads in our pid namespace, which implies that we are in the initial namespace.
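
Putting this together, here is a minimal C sketch of the check as just described (my own code; as the addendum above notes, a race remains):

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* We are in the initial pid namespace iff pid 2 is kthreadd, i.e.
   readlink(2) on its exe fails with ENOENT while lstat(2) still
   succeeds -- checked in exactly this order. */
static int in_initial_pid_ns(void)
{
    char buf[PATH_MAX];
    struct stat st;

    if (readlink("/proc/2/exe", buf, sizeof buf) >= 0)
        return 0;            /* a real executable: user-space pid 2 */
    if (errno != ENOENT)
        return 0;            /* some other error: be conservative */
    if (lstat("/proc/2/exe", &st) != 0)
        return 0;            /* pid 2 does not exist at all */
    return 1;                /* empty exe link: a kernel thread */
}

int main(void)
{
    puts(in_initial_pid_ns() ? "initial pid namespace" : "not the initial pid namespace");
    return 0;
}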

Phew, this went deeper than expected.

NP: David Bowie—Lazarus

24dec2021 · Merry Christmas!

Picture of a cat in front of a Christmas tree

Frohe Weihnachten, ein schönes Fest, und einen guten Rutsch ins neue Jahr wünscht euch
Leah Neukirchen

Merry Christmas and a Happy New Year!

NP: Sade—Keep Looking

10dec2021 · Surveying lava basins with BQN and fixpoints

Yesterday, Advent of Code had an interesting problem: given the heightmap of a lava cave, compute the lowest points and the size of their basins (connected regions).

Let’s do this in BQN again, as this problem teaches some good ways to think in array languages.

First, let’s load the input data into a matrix:

   d ← > '0' -˜ •FLines"day09"

We subtract the character 0 from the ASCII lines to get numerical rows. The merge function (>) then converts this list-of-lists into a 2-dimensional array. For the sample data, we get:

┌─
╵ 2 1 9 9 9 4 3 2 1 0
  3 9 8 7 8 9 4 9 2 1
  9 8 5 6 7 8 9 8 9 2
  8 7 6 7 8 9 6 7 8 9
  9 8 9 9 9 6 5 6 7 8
                      ┘

A low point is a point that is lower than every orthogonally adjacent point. Thanks to array programming, we can solve this for the whole matrix at once without any loops!

The core idea is to shift the array into each cardinal direction, and then compute the minimum of these arrays. If the original array is smaller than the array of the minimums, it’s a low point.

By default, shifting («, ») in BQN inserts zeroes for numerical arrays. But since we are looking for the minimum, we need to shift in a value that is higher than anything else. We can simply use ∞.

So, to shift in from the left, we use:

   ∞»˘d
┌─
╵ ∞ 2 1 9 9 9 4 3 2 1
  ∞ 3 9 8 7 8 9 4 9 2
  ∞ 9 8 5 6 7 8 9 8 9
  ∞ 8 7 6 7 8 9 6 7 8
  ∞ 9 8 9 9 9 6 5 6 7
                      ┘

Unfortunately, shifting from the top is not so easy:

   ∞»d
Error: shift: =𝕨 must be =𝕩 or ¯1+=𝕩 (0≡=𝕨, 2≡=𝕩)
at ∞»d

We would need to make a list of ∞s long enough to agree with the array width:

   (∞¨⊏d)»d
┌─
╵ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
  2 1 9 9 9 4 3 2 1 0
  3 9 8 7 8 9 4 9 2 1
  9 8 5 6 7 8 9 8 9 2
  8 7 6 7 8 9 6 7 8 9
                      ┘

However, since we need to do this on every side, we can also look at the problem differently: we shift in from the left under rotation by 0, 90, 180, 270 degrees.

How do we “rotate” a matrix? We reverse the rows (⌽) and then transpose (⍉) it.

   ⍉⌽d
┌─
╵ 9 8 9 3 2
  8 7 8 9 1
  9 6 5 8 9
  9 7 6 7 9
  9 8 7 8 9
  6 9 8 9 4
  5 6 9 4 3
  6 7 8 9 2
  7 8 9 2 1
  8 9 2 1 0
            ┘

By using the repeat modifier (⍟) we can easily rotate several times.

   (⍉∘⌽⍟(↕4)) d
┌─
· ┌─                      ┌─            ┌─                      ┌─
  ╵ 2 1 9 9 9 4 3 2 1 0   ╵ 9 8 9 3 2   ╵ 8 7 6 5 6 9 9 9 8 9   ╵ 0 1 2 9 8
    3 9 8 7 8 9 4 9 2 1     8 7 8 9 1     9 8 7 6 9 8 7 6 7 8     1 2 9 8 7
    9 8 5 6 7 8 9 8 9 2     9 6 5 8 9     2 9 8 9 8 7 6 5 8 9     2 9 8 7 6
    8 7 6 7 8 9 6 7 8 9     9 7 6 7 9     1 2 9 4 9 8 7 8 9 3     3 4 9 6 5
    9 8 9 9 9 6 5 6 7 8     9 8 7 8 9     0 1 2 3 4 9 9 9 1 2     4 9 8 9 6
                        ┘   6 9 8 9 4                         ┘   9 8 7 8 9
                            5 6 9 4 3                             9 7 6 7 9
                            6 7 8 9 2                             9 8 5 6 9
                            7 8 9 2 1                             1 9 8 7 8
                            8 9 2 1 0                             2 3 9 8 9
                                      ┘                                     ┘
                                                                              ┘

Finally, we perform the shift operation under (⌾) the rotation, that is, BQN rotates the array, does the shift, and knows how to undo the rotation!

┌─
· ┌─                      ┌─                      ┌─                      ┌─
  ╵ ∞ 2 1 9 9 9 4 3 2 1   ╵ 3 9 8 7 8 9 4 9 2 1   ╵ 1 9 9 9 4 3 2 1 0 ∞   ╵ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
    ∞ 3 9 8 7 8 9 4 9 2     9 8 5 6 7 8 9 8 9 2     9 8 7 8 9 4 9 2 1 ∞     2 1 9 9 9 4 3 2 1 0
    ∞ 9 8 5 6 7 8 9 8 9     8 7 6 7 8 9 6 7 8 9     8 5 6 7 8 9 8 9 2 ∞     3 9 8 7 8 9 4 9 2 1
    ∞ 8 7 6 7 8 9 6 7 8     9 8 9 9 9 6 5 6 7 8     7 6 7 8 9 6 7 8 9 ∞     9 8 5 6 7 8 9 8 9 2
    ∞ 9 8 9 9 9 6 5 6 7     ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞     8 9 9 9 6 5 6 7 8 ∞     8 7 6 7 8 9 6 7 8 9
                        ┘                       ┘                       ┘                       ┘
                                                                                                  ┘

Now we insert (´) the minimum function (⌊) between these arrays and compute the minimum at each position:

   ⌊´{∞⊸»˘⌾(⍉∘⌽⍟𝕩)d}¨↕4
┌─
╵ 1 2 1 7 4 3 2 1 0 1
  2 1 5 6 7 4 3 2 1 0
  3 5 6 5 6 7 4 7 2 1
  7 6 5 6 7 6 5 6 7 2
  8 7 6 7 6 5 6 5 6 7
                      ┘

The positions where the original array d is still smaller are the low points, and we store them for part 2.

   l ← d < ⌊´{∞⊸»˘⌾(⍉∘⌽⍟𝕩)d}¨↕4
┌─
╵ 0 1 0 0 0 0 0 0 0 1
  0 0 0 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 1 0 0 0
                      ┘

To finish part 1, we need to compute the risk level for each low point, which is 1 plus the height. So compute that:

   (1+d)×l
┌─
╵ 0 2 0 0 0 0 0 0 0 1
  0 0 0 0 0 0 0 0 0 0
  0 0 6 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 6 0 0 0
                      ┘

Finally, we compute the end result by deshaping (⥊) the array into a single long list and summing it up:

   +´⥊(1+d)×l
15

This concludes part 1.

Part 2 is less straightforward. We need to compute the basins around every low point, that is, the areas bounded by the points of height 9.

Since we need to compute the size for each basin in the end, we need to know which point belongs to which basin. To get started, we first give each low point a unique number. One way to do this is to assign an index to each position and just use those that are used in the low point array.

In BQN, the shape (≢) of an array is the list of sizes for each axis:

   ≢d
⟨ 5 10 ⟩

We count up to the product of these values and add one (to avoid numbering a basin with 0, which will be used for the basin limits). Then, we multiply this with the array of low points so all ones in it get turned into a unique basin index. We perform the multiplication under deshape, so we keep the shape of the input data:

   s ← (1+↕×´≢d) ×⌾⥊ l
┌─
╵ 0 2  0 0 0 0  0 0 0 10
  0 0  0 0 0 0  0 0 0  0
  0 0 23 0 0 0  0 0 0  0
  0 0  0 0 0 0  0 0 0  0
  0 0  0 0 0 0 47 0 0  0
                         ┘

Here’s the main idea for solving part 2: we incrementally grow these areas by adding their neighbors until the whole array is filled. For one step, we do this with a function Rise:

   Rise ← (d≠9)⊸×(»⌈«⌈«˘⌈»˘⌈⊣)

Here, shifting in zeroes is good enough, so we can use the monadic shift functions. The train at the end computes the maximum (⌈) of the four shifted versions and the array itself (⊣). We multiply it with a matrix that is 0 where the depth is 9, so the basin limits will be constantly zero.

Let’s run it once and twice to see how it works:

   Rise s
┌─
╵ 2  2  0  0 0  0  0  0 10 10
  0  0 23  0 0  0  0  0  0 10
  0 23 23 23 0  0  0  0  0  0
  0  0 23  0 0  0 47  0  0  0
  0  0  0  0 0 47 47 47  0  0
                              ┘
   Rise⍟2 s
┌─
╵ 2  2  0  0  0  0  0 10 10 10
  2  0 23 23  0  0  0  0 10 10
  0 23 23 23 23  0  0  0  0 10
  0 23 23 23  0  0 47 47  0  0
  0  0  0  0  0 47 47 47 47  0
                               ┘

As you can see, row 1, column 3 does not get filled by basin #2 since it has height 9.

Now we could just iterate this step often enough, or actually only until we reach a fixpoint; that is, until applying Rise again doesn’t change the value anymore.

A simple way to implement a fixpoint operator is

_fix ← { 𝕩 ≡ 𝔽 𝕩 ? 𝕩 ; 𝕊 𝔽 𝕩 }

Let’s compute the filled map:

   Rise _fix s
┌─
╵  2  2  0  0  0 10 10 10 10 10
   2  0 23 23 23  0 10  0 10 10
   0 23 23 23 23 23  0 47  0 10
  23 23 23 23 23  0 47 47 47  0
   0 23  0  0  0 47 47 47 47 47
                                ┘

Now, we just need to compute the sizes of the basins, which means computing a histogram of the basin numbers. We deshape the array again, and keep only the values bigger than zero:

   m ← ⥊ Rise _fix s
   (m>0)/m
⟨ 2 2 10 10 10 10 10 2 23 23 23 10 10 10 23 23 23 23 23 47 10 23 23 23 23 23 47 47 47 23 47 47 47 47 47 ⟩

With the nice group indices function (⊔), we can count how often each value appears:

   ≠¨⊔(m>0)/m
⟨ 0 0 3 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 ⟩

Finally, let’s compute the three largest values by sorting in descending order (∨) and taking the first three entries (↑):

   3↑∨≠¨⊔(m>0)/m
⟨ 14 9 9 ⟩

We then multiply these together and get the result for part 2:

   ×´3↑∨≠¨⊔(m>0)/m
1134

NP: The New Basement Tapes—Quick Like a Flash

07dec2021 · Counting lanternfish with BQN and linear algebra

Yesterday, Advent of Code had an interesting problem: modeling the growth rate of lanternfish. Read the link for the full details, but the core algorithm is this:

Each day, a 0 becomes a 6 and adds a new 8 to the end of the list, while each other number decreases by 1 if it was present at the start of the day.

For part 1, you need to compute the total number of lanternfish after 80 days, and for part 2 after 256 days. While the first part is possible with a native list based version (around 300k), the second part yields over 1.6 trillion for my input, so storing every lanternfish on its own is infeasible.

Luckily, with a bit of thinking we quickly realize that the position and order of the lanternfish are irrelevant, and we can thus just count how many fish of each age we have and sum them up at the end for the answer.

I decided to tackle this problem using the array language BQN and will try to explain how it works.

First, let’s convert the sample input data into a histogram: For reasons that will be obvious in a minute, we compare each input element with the list of numbers from 0 to 8 (↕9).

   input ← ⟨3,4,3,1,2⟩
   input =⌜ ↕9
┌─
╵ 0 0 0 1 0 0 0 0 0
  0 0 0 0 1 0 0 0 0
  0 0 0 1 0 0 0 0 0
  0 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0 0
                    ┘

If we now sum this table by column, we get a count of how often each age appears (note that ages over 8 never happen):

   d ← +˝ input =⌜ ↕9
⟨ 0 1 1 2 1 0 0 0 0 ⟩

Next, we need to figure out what happens in a step. We can model one part (a number decreases by 1, or becomes 8 when it was 0) by rotating (⌽) the array to the left:

   1 ⌽ d
⟨ 1 1 2 1 0 0 0 0 0 ⟩
   2 ⌽ d
⟨ 1 2 1 0 0 0 0 0 1 ⟩

But we also need to add the first element to column 6, e.g. by adding an appropriately scaled vector. First, the vector, which we get by taking the numbers from 0 to 8 and comparing them to 6:

   6=↕9
⟨ 0 0 0 0 0 0 1 0 0 ⟩

Now we can scale this vector by the first element (⊑):

   (6=↕9) × ⊑ ⟨9,8,7,6,5,4,3,2,1⟩
⟨ 0 0 0 0 0 0 9 0 0 ⟩

Finally, we put both things together. We can use a hook if we bind (⊸) the argument of the rotation to the function:

Step ← 1⊸⌽ + (6=↕9)×⊑

Now, we simply need to repeat (⍟) this step 80 (resp. 256) times and count up the results to solve the puzzle!

   +´ Step⍟80 d
5934
   +´ Step⍟256 d
26984457539

This agrees with the provided example.

Of course, computing 256 steps is very fast, but we can wonder whether there is a more efficient solution. Let’s say we wanted to compute many lanternfish populations quickly.

The key realization is that Step is a linear function, which means:

(n × Step v) ≡ (Step n × v)
(Step v) + (Step w) ≡ (Step v + w)

We can thus compute Step for every basis vector of our 9-dimensional histogram space; let’s take the unit vectors here:

   step256 ← {+´Step⍟256 𝕩}˘ =⌜˜↕9
⟨ 6703087164 6206821033 5617089148 5217223242 4726100874
  4368232009 3989468462 3649885552 3369186778 ⟩

And now we can solve part 2 for any input by using a dot-product (pairwise multiplication followed by addition):

   step256 +´∘× d
26984457539

This yields the same result as above, but only does 9 multiplications and 9 additions.

We can also compute step256 faster by using a bit of matrix theory. Since we know the operation of step on the basis vectors, we can compute the matrix m corresponding to this linear operator:

   m ← Step˘ =⌜˜↕9
┌─
╵ 0 0 0 0 0 0 1 0 1
  1 0 0 0 0 0 0 0 0
  0 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0 0
  0 0 0 1 0 0 0 0 0
  0 0 0 0 1 0 0 0 0
  0 0 0 0 0 1 0 0 0
  0 0 0 0 0 0 1 0 0
  0 0 0 0 0 0 0 1 0
                    ┘

Note how we just applied Step for every column of the identity matrix.

We can solve the problem the same way now by computing the 256th power of the matrix, multiplying it with the histogram vector, and summing up again.

First, we need the matrix product:

   MP ← +˝∘×⎉1‿∞

Naively, we can compute the power like this (we already have m to the power of 1!):

   m256 ← m MP⍟255 m

Now multiply d to it, and sum up:

   +´ m256 +˝∘× d
26984457539

Note that step256 is just the sum of rows, so we can precompute that:

   step256 ≡ +˝˘ m256
1

If you paid attention, you may now wonder why we replaced 256 row operations with 255 matrix multiplications and claim it’s faster to compute. We need an additional trick: fast matrix exponentiation. Instead of 255 matrix multiplications, we just need to square the matrix 8 times to compute the 256th power:

   m256 ≡ MP˜⍟8 m
1

This yields an asymptotically faster algorithm to compute the step vector for higher powers. For non-powers of two you need to implement a square-and-multiply algorithm, or look up the optimal result. This is left as an exercise to the reader.

For day 80 we can use only 7 matrix multiplications:

   (MP˜⍟4 (m MP MP˜⍟2 m)) ≡ (m MP⍟79 m)
1

In theory, you could optimize this even further and compute a closed form from the eigenvectors of m, but you’ll end up with nonic roots and I have not found a way to compute them precisely enough to be useful. So let’s leave it at that for now.

NP: Slade—My Oh My

20jan2021 · Remembering the work of David M. Tilbrook and the QED editor

Last week, I learned that David M. Tilbrook has died, which made me sad. I did not know him personally and cannot say much about his life, but I studied his publications and software ideas a lot, and consider them interesting, especially from a historic perspective.

Unfortunately, most of these things are on websites taken down already, so this post will refer to pages on the Internet Archive extensively. [Update 2020-01-16: His son Joe Tilbrook sent me a mail stating that http://qef.com/ is up again.]

I first came across David when I researched the history of the QED editor, the direct predecessor of the Unix standard text editor ed. The history of QED is well documented by Dennis Ritchie. However, for a long time the QED source code was hard to find, and I heard that David still maintained a version.

Around 2016, I swooped a copy of Caltech qed from USENIX tape 80.1 and tried making it work on modern platforms, with moderate success. Thanks to efforts by Arnold Robbins there is now a QED Archive which also features a copy of the 1992 QED from Toronto, which contains contributions by Tom Duff, Robert Pike, Hugh Redelmeier and David Tilbrook. If you want to run it yourself, there is a modernized, UTF-8 aware version available now!

I do not know what exactly David contributed to QED, but from his writings it’s clear he was a heavy user of QED and wrote many scripts in it. Remember that in the early 80s, awk was quite limited and Perl did not exist, so general-purpose string processing on Unix was difficult. We will have an example of a small QED script at the end of this post.

David’s opus magnum was a suite of tools called QEF, quod erat faciendum. Euclid wrote this at the end of geometric constructions, and in a software sense, we want a build system to produce what was to be made. At its time of creation, David was responsible for maintaining a (for the time) large system, essentially a Unix distribution. Tooling was stuck in the era of 1977’s make(1). For the details and basic ideas, see Tilbrook and Place (1986), “Tools for the Maintenance and Installation of a Large Software Distribution” (Huge thanks to Alan Grosskurth for making a copy available.) My favorite footnote is the one about their Prolog prototype of a build system: “Is the cray free? I need to reinstall /bin/true!”

Back then, Unix software development was plagued with all kinds of incompatibilities and vendor-specific workarounds. QEF was one of the first tools to provide concepts like out-of-tree builds and automatic dependency detection, and it provided portable abstractions for things like creating shared libraries, which back then required performing occult rituals.

The QEF whitepaper from 1996 explains the system at a more developed state.

What is intriguing is that the whole toolkit is created in classic Unix manner from small tools and little languages. I acquired an evaluation copy of QEF in 2015, but sadly it had no copy of QED included. However, I could read the full manpages for many of his tools.

Looking at these tools was instructive; many are obsolete now because their features have been added to standard tools, or because we can now afford to use more general, inefficient tools. But for example, his go tool directly influenced the user interface of my nq tool, and reading about rls, lls, fexists, and ftest inspired my lr. His rpl lives on as my mend.

There are also additional resources and talks by David worth checking out.

Well, I promised you some QED code earlier, so let’s do fizzbuzz. (I heard this is useful in job interviews, so next time why not ask whether you can use QED for the task!)

It turns out this does not get as obscure as expected, but I’m not claiming what I wrote is idiomatic QED. I hacked it together while studying the tutorial a little bit.

ba
"loop
zc#+1

a\N\N.
za#:\zc%3=0 yf s/$/fizz/
za#:\zc%5=0 yf s/$/buzz/
s/^$/\zc/

zc#=100 yf`loop

,p
Q

We can run this using the convenient -x flag:

% qed -x fizzbuzz.qed
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16
    ...

So, how does it work? First, we switch to buffer a of QED’s 56 available buffers (ba). By default we are in the script buffer named ~ due to the -x flag, which made me scratch my head in the beginning because the script always printed its own source at first!

Next, we set up a loop. QED provides structured programming in the form of a while loop, but that requires fitting the loop body on the same line. Instead, we will use jumps. We start with a label "loop and increment the c register using zc#+1. The # enables arithmetic mode for registers, they also can be strings. Since the register is empty, it will be regarded as zero. The end of the loop is the zc#=100 yf`loop line, which checks whether the c register equals 100, and if not (yf) jumps back to the label loop. Curiously, QED has different commands for jumping forwards (') and backwards (`). Having explicit directions is, I assume, an implementation detail of the buffer-based program storage, but compare with Unix’s goto or TECO’s O command, which both search from the top.

Inside the loop, we first append an empty line (a\N\N.) This works like in ed:

a

.

But QED allows using \N as a newline, so we can save some screen space.

Then, we need to do the fizzbuzz logic. The command za#:\zc%3=0 will copy the c register to the a register (we need to do this, as all arithmetic operations store their result in a register), then we take the a register modulo 3 and check if the register is then zero. If it isn’t, we jump a line forward (yf). If we didn’t jump, we append fizz to the line, using the s command (pretty much like in ed).

We deal with the buzz case the same way, but modulo 5.

If the line is still empty, we have to say the number, so we append it using s again and this time insert the c register into the input stream using \zc. Note that this is not interpolated by the s command, but by QED itself. You can write \zc (or \N) essentially everywhere and it will work as if you’d have typed it in! A \ can be quoted as \c, so we can do this:

za:foo\czb
zb:bar
a
\za
.
p
foobar

Note how the insertion of the a register resulted in the b register being inserted as well! Luckily, the people involved with QED did better when they wrote shells later.

At the end of the loop, we have the fizzbuzz output in the buffer, so we can just print out the whole thing (,p) and quit QED with no questions asked (Q).

That wasn’t so bad, was it? Just like the tutorial says:

By striking a harmonious balance between Qed and UNIX’s other tools, the intelligent user will find Qed powerful, flexible, easy to master and fun!

NP: Talking Heads—Seen And Not Seen

24dec2020 · Merry Christmas!

Frohe Weihnachten, ein schönes Fest, und einen guten Rutsch ins neue Jahr wünscht euch
Leah Neukirchen

Merry Christmas and a Happy New Year!

NP: Elvis Perkins—It's A Sad World After All

Copyright © 2004–2022