What is homoiconicity then? Typical definitions state that it is simply “code as data”, point to a relationship between a program’s structure and its syntax, or note that the program source is expressed in a primitive data type of the language. Below, we will show that none of these definitions makes much sense.
Before that, however, we’ll return to the original definition, to ensure we have at least some sensible frame of reference for the rest of the article. This is also the definition we’ll return to in the final section, when we put it to the test by examining the canonical example of homoiconicity.
The term homoiconic was first coined by Calvin Mooers in the article TRAC, A Text-Handling Language (L. Peter Deutsch was the other author of that article, but he has made it clear to me that Mooers was the originator of the term and Deutsch’s contribution was only on the technical side).1 In the original article, the meaning of homoiconicity seems to be spelled out quite clearly. The article opens by stating that the TRAC language is homoiconic, although without yet using that term explicitly:
The external and internal forms of the TRAC language are the same.
Before introducing the term itself, the importance of a single representation for viewing and manipulation by the user and for interpretation by the computer is repeated no fewer than five times, numbered here in square brackets for clarity:
One of the main design goals was [1] that the input script of TRAC (what is typed in by the user) should be identical to the text which guides the internal action of the TRAC processor. In other words, [2] TRAC procedures should be stored in memory as a string of characters exactly as the user typed them at the keyboard. [3] If the TRAC procedures themselves evolve new procedures, these new procedures should also be stated in the same script. [..] [4] At any time, it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution. [5] It is desirable that the internal character code representation be identical to, or very similar to, the external code representation.
This paragraph finally concludes with the definition itself – seemingly leaving no room for doubt:
Because TRAC procedures and text have the same representation inside and outside the processor, the term homo-iconic is applicable, from homo meaning the same, and icon meaning representation.
Unfortunately, this rather straightforward definition is not the only one in active use. There is in fact a proliferation of alternative, and entirely misguided, definitions, some of which are quite persistent. It is worth pointing out explicitly why each of them is misguided: if homoiconicity is to have any meaning at all, it’s certainly a good idea to point out what it doesn’t mean.
“Code as data”
First, any attempt to explain homoiconicity by using the phrase “Code as data” is quite meaningless. The reason is simply that “Code as data” can be used to indicate any of a large number of quite distinct ideas, some of which are listed below:
- Functions as first-class citizens.
- The fact that a computer in a Von Neumann architecture stores programs and data in the same memory device.
- Reflection and metaprogramming.
- Homoiconicity itself.
The punchline is in the last bullet in the list: if homoiconicity is “code as data”, and “code as data” is homoiconicity, we’ve simply created a circular definition and explained nothing at all – but with extra potential for confusion because “code as data” may also refer to other ideas.
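To see how distinct these ideas really are, here is a short Python sketch (purely illustrative) putting two of them side by side:

```python
# (a) Functions as first-class citizens: code passed around
#     as an ordinary value.
double = lambda x: x * 2

def apply(f, v):
    return f(v)

result = apply(double, 21)  # → 42

# (b) Reflection/metaprogramming: a program inspecting its
#     own code as a data object.
constants = double.__code__.co_consts  # the literal 2 lives in here
```

Neither of these, by itself, says anything about whether a language’s internal and external representations coincide.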
Program structure & syntax
The second definition attempts to draw a connection between the program’s structure and its syntax. For example, there’s this formulation which is currently featured on Wikipedia:
Homoiconicity [..] is a property [..] in which the program structure is similar to its syntax
This is a category error: a similarity is drawn between ideas that exist at two separate levels of abstraction.

Syntax makes a statement about program structure; that is, the syntax of a language is the set of rules that defines which combinations of symbols are considered a correctly structured program.

To say that the rules defining the program’s structure are similar to the program’s structure makes no sense. First, since a single syntax defines a potentially infinite set of correctly structured programs, which program are we talking about as a reference for comparison? Second, what would the measure of similarity across these two levels of abstraction be?
Similar confusion surrounding the terminology of syntax and structure can be found further down in the same article:
If a language is homoiconic, it means that the language text has the same structure as its abstract syntax tree (AST)
This is also wrong, but for the opposite reason: the abstract syntax tree is by definition a representation of the structure of the language text. This is the case for any AST, in any language. Thus, to say that this has anything to do with homoiconicity is quite meaningless.
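The point generalizes to any language with an accessible parser; a small illustration in Python (chosen arbitrarily):

```python
import ast

# The AST is, by definition, a structured representation of
# the program text -- this holds in any language.
tree = ast.parse("print(1 + 2)")
dump = ast.dump(tree)
print(dump)
```

The dump mirrors the nesting of the source text, yet nobody would call Python homoiconic on that basis.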
Meta-circular evaluation

Third, the property of homoiconicity is often conflated with the existence of a meta-circular evaluator, but the two concepts are entirely separate. An example of such confusion is the following quote from the Wikipedia article:
A typical demonstration of homoiconicity is the meta-circular evaluator.
To understand this claim, let’s first examine what a meta-circular evaluator is. The term was coined originally in 1972 by John C. Reynolds in his paper Definitional Interpreters for Higher-Order Programming Languages.2
We have coined the word “meta-circular” to indicate the basic character of this interpreter: It defines each feature of the defined language by using the corresponding feature of the defining language. For example, when eval is applied to an application expression [..] of the defined language, it evaluates an application expression [..] in the defining language.
The relationship with homoiconicity, if it exists at all, is accidental at best.
It is quite possible to construct a meta-circular interpreter in a language which is clearly not homoiconic. Consider for example a Python interpreter, written in Python, which implements each of Python’s language constructs using that same construct in Python: this satisfies the definition of a meta-circular evaluator. However, Python is not homoiconic, because the internal and external forms of the language are quite different: the external representation of a Python program is a text file, while the internal form, on which the interpreter operates, is bytecode.
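To make the idea concrete, here is a toy sketch of such a meta-circular fragment: an evaluator for a tiny subset of Python’s own AST, in which each feature of the defined language is implemented using the corresponding feature of the defining language (illustrative only, nowhere near a full interpreter):

```python
import ast

def meval(node, env):
    """Evaluate a tiny subset of Python, in Python."""
    if isinstance(node, ast.Expression):
        return meval(node.body, env)
    if isinstance(node, ast.Constant):       # literals
        return node.value
    if isinstance(node, ast.Name):           # variable lookup
        return env[node.id]
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        # '+' of the defined language is Python's own '+'
        return meval(node.left, env) + meval(node.right, env)
    raise NotImplementedError(type(node).__name__)

value = meval(ast.parse("x + 1", mode="eval"), {"x": 41})  # → 42
```

Note that even this toy evaluator operates on an internal form (the AST), not on the program text itself, which underlines how separate meta-circularity is from the question of representation.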
Primitive data types
Another often seen definition of homoiconicity focuses on the relationship between the primitive data types in the language and the representation of the language. Again, we quote from Wikipedia:
In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself.
Again, this is a definition which doesn’t hold up to any scrutiny.
First, consider that most languages have a primitive datatype for strings of text, and the programs in most languages are presented to the user as a string of text. Should we conclude from this that most languages are homoiconic?
Second, for most languages, it is possible to write a parser in that language itself, that stores the resulting abstract syntax tree in a primitive type in the language. For example: we can write a parser in Java that parses Java source code into Java objects, which are primitive data types in the language. Should we conclude from this that Java is homoiconic?
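Both reductios can be made concrete in a few lines; Python stands in here for “most languages”:

```python
import ast

# First reductio: the program is available in a primitive
# string type of the language.
source = "print(1 + 2)"
is_primitive_string = isinstance(source, str)

# Second reductio: a parser written in the language stores
# the AST in the language's own objects.
tree = ast.parse(source)
is_native_object = isinstance(tree, ast.AST)

# By the quoted definition, Python (and nearly every other
# language) would now count as homoiconic.
```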
In both cases, the answer is clearly “no” – if all languages are homoiconic, the term loses all meaning.
Now that we know what homoiconicity is not, the way is clear for a discussion of what it is. Let’s return to the original definition, and try to fit it to an example. Remember, the original definition of homoiconicity centers on a similarity between the internal and external representations of a language. In the literature, Lisps are the favorite example of homoiconicity; we will stick with that tradition here, and examine the case of an arbitrary Lisp program in an arbitrary language in the Lisp family.3
First, the external representation: the basic syntactical element of Lisps is the s-expression. Thus, a typical program in Lisp is represented to the programmer as nothing more than an s-expression.
Second, the internal representation. The evaluation of a Lisp program is typically defined in terms of those s-expressions directly. One example is the definition of Scheme given by Abelson & Sussman; another is the definition given on this site. In both cases, the semantics of the language are given in terms of a case-analysis on an s-expression. Thus, the (virtual) machine operates on s-expressions directly, and we can say the internal representation of the program is an s-expression.
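Such a case-analysis can be sketched in a few lines; here s-expressions are modelled as nested Python lists and strings (a toy, not any particular Lisp):

```python
def seval(sexp, env):
    if isinstance(sexp, str):                   # a symbol
        return env[sexp]
    if not isinstance(sexp, list):              # self-evaluating
        return sexp
    head = sexp[0]
    if head == "quote":                         # (quote x)
        return sexp[1]
    if head == "if":                            # (if c t e)
        branch = sexp[2] if seval(sexp[1], env) else sexp[3]
        return seval(branch, env)
    fn = seval(head, env)                       # application
    return fn(*[seval(arg, env) for arg in sexp[1:]])

env = {"+": lambda a, b: a + b, "t": True}
answer = seval(["if", "t", ["+", 1, 2], 0], env)  # → 3
```

The evaluator dispatches directly on the shape of the s-expression, which is exactly the sense in which the machine can be said to operate on s-expressions.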
Because the internal and external representations are the same, we might say that this combination of language and interpreter is homoiconic.
The observant reader may raise at least two objections against the above.
The first objection concerns the external representation. In the above we simply stated that the external representation is an s-expression. In most practical programming environments, however, the actual representation of program sources is as text files which contain strings of characters. It is only after parsing this text that the representation is really an s-expression. In other words: in practical environments the external representation is not an s-expression, but text.
The second objection concerns the internal representation. For performance reasons, practical implementations of Lisp interpreters generally do not operate directly on s-expressions internally. Even though a Lisp might be defined in terms of a case-analysis on s-expressions, it is not usually implemented as such. Thus, in practice the internal representation is not actually an s-expression.
In an attempt to counter these arguments, we might slightly alter our definition, to state that homoiconicity is not a boolean property, but a scalar one: a language in a given environment can be homoiconic to some degree.
In fact, the original definition by Mooers and Deutsch leaves some space for such an interpretation when it states that “the internal [..] representation be identical to, or very similar to, the external [..] representation.” (emphasis mine).
A more precise term for such “partial homoiconicity” could perhaps be para-iconicity or simula-iconicity. However, that would introduce yet another term into an already confused vocabulary. In any case, given this scalar interpretation, we can see that the example above is indeed homoiconic to a very large degree:
Regarding the external representation: parsing of s-expressions is trivial, as is the reverse operation of converting s-expressions to some printable form. The use of explicit brackets for all list-expressions ensures that the structure of s-expressions is always explicitly visible. Thus, to say that the external representation of a Lisp program is an s-expression holds at least some truth.
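As an illustration of just how trivial that parsing is, here is a minimal s-expression reader and printer, sketched in Python (real Lisp readers handle more: strings, quoting, numbers, comments):

```python
# Turn "(a (b c))" into nested lists and back again.
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    token = tokens.pop(0)
    if token == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return lst
    return token

def show(sexp):
    if isinstance(sexp, list):
        return "(" + " ".join(show(x) for x in sexp) + ")"
    return sexp

text = "(define (square x) (* x x))"
round_trip = show(read(tokenize(text)))  # identical to text
```

The round trip between text and structure is a handful of lines, compared to the substantial parsers most languages require.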
Regarding the internal representation: even though an actual Lisp interpreter might be optimized, its operational semantics are defined in terms of a virtual machine that operates on s-expressions. Any properly implemented optimization will have the same behavior (performance aside) as its non-optimized counterpart, so from the perspective of the programmer the internal representation is still the s-expression.4 Thus, to say that the internal representation is an s-expression holds some truth as well.
For comparison with a “less homoiconic” language, we could consider the example of compiled C: the external representation of a C program is as a piece of program text; the internal representation is in the form of machine code. Strings of text and machine code are quite dissimilar, and to map between them takes considerable effort.
Homoiconicity is a term surrounded by much confusion.
Some of this confusion can be cleared up, by putting aside misguided definitions and returning to the original definition: languages which have the same external and internal representations are homoiconic.
Unfortunately, that doesn’t resolve the issue entirely, because even when sticking to a seemingly sensible definition, we can see a lot of problems with the term: it is almost never the case that the external representation and the internal representation are exactly the same.
We are forced to withdraw to a position of “para-iconicity”: the idea that homoiconicity is a scalar property rather than a boolean one. In that view, a language which has at least some similarity between some external representation and some model for the internal representation could be said to be homoiconic.
However, that’s not a very good position to be in. I’d say it’s much easier to just talk about certain properties of your favorite language directly. For example, “the syntax is easy to parse”, or “the semantics of the language are completely defined on a single page of a book”. That is, there is much to be said in favor of Lisp’s s-expressions without resorting to such a poorly defined term.
Maybe it’s time to stop saying “homoiconic”.
Afterword: L. Peter Deutsch responds
In June 2020, I reached out to L. Peter Deutsch with the request to read and comment on the above. He wrote the following:
I did not coin this word [homoiconic] – it was 100% the creation of my friend the late Calvin Mooers, who was the original conceiver of TRAC. My contribution in working with Calvin was to take his concept, turn it into a rigorous language definition, and implement it.
Having read through your blog article, I would say that I essentially agree with your argument. In my view, the root “icon” refers specifically to the concrete appearance of something. For example, “WYSIWYG” in the context of text editing is arguably the same idea as “homoiconic” in the context of programming languages, contrasted to languages like TeX where the document has two very different appearances (the TeX input file and the output document). WYSIWYG editors are homo-iconic not simply in the characters of the document, but also in their formatting (font, spacing, …); but I think the same concept applies.
There are other homoiconic macro languages: while TRAC may be the earliest on record (I don’t remember the date or specifications of Strachey’s GPM), m4, which I use actively as a preprocessor for other languages, also fits the concept.
A footnote in that article points out that this was “following suggestion of McCullough W.S., based upon terminology due to Peirce, C.S.” It appears that the “Peirce” in question is Charles Sanders Peirce, who wrote extensively on semiotics, which is the study of signs, although it is not clear whether and where he used the phrase homo-iconic explicitly. W.S. McCulloch is likely to be Warren Sturgis McCulloch, who was in Cambridge at the time. ↩
The definition in the influential Structure and Interpretation of Computer Programs differs somewhat: “An evaluator that is written in the same language that it evaluates is said to be metacircular.” Although the constraint that each feature of the defined language is implemented using the corresponding feature of the defining language is not repeated in that definition, the metacircular evaluator as defined in the book satisfies it in practice. ↩
In fact, the original article excludes Lisp as an example of homoiconicity, but only because Lisp had not settled on a single representation in terms of s-expressions at the time of writing: “Finally, LISP is troubled with a dual language problem: an M-language, which is easy to read, and is used externally, and an S-language, with which the LISP processor operates, and which is usable externally only by the hardened initiates. It should be noted here that were the S-language the only LISP language, LISP would be close to being homo-iconic (excluding the machine-language functions).” ↩
Taking this view to its ultimate consequence, however, raises even further questions around the concept of homoiconicity: for a well-encapsulated machine, we cannot observe its inner workings by definition; in that view, making any statement about the internal representation of the machine is meaningless. More generally, the original definition has the problem that the idea that there is a single external and a single internal representation of the program does not match with reality. In fact, there is a whole chain of representations, including electrons in the brain of the programmer, photons emitted from the screen, program text, machine code, and electrons moving in the CPU. ↩