6 February 2022

When you’re learning computer science, you typically learn that programming languages fall into two categories.

  • Interpreted languages are executed at the point they are read, by a program called an interpreter.
  • Compiled languages are translated by a compiler program into a different language (typically machine code) which can be executed later.

Is this a sensible distinction? Does it make sense to classify languages, or even implementations, in this way?

If we take just the definitions above, we see they don’t really mean anything:

  • Any “interpreted language” can be compiled, by making a compiler which emits the interpreter bundled with the source program.
  • Any “compiled language” can be interpreted, by making an interpreter which compiles the program and then immediately runs it.

Ok, sure, but that’s not the only way interpreted and compiled languages differ in practice. For example, it’s possible to use an interpreted language interactively, in a read-eval-print loop. Compiled languages can’t do that.

But is that true? Some languages compile to machine code but can still be used interactively, like Haskell or Julia. And the smallest meaningful unit of code is a property of the language, not something intrinsic to its implementation. Why does a language need to be piece-by-piece meaningful to qualify as interpreted? It’s not generally possible to make sense of C source code in units smaller than a file, so if a C interpreter had this restriction, would it be somehow less interpreted?

I can’t think of any language feature that, by necessity, requires a program to be run at the time it’s read rather than translated into a different form. Any possible answer appeals to convention about what counts as a valid compiled form (why isn’t an interpreter-and-source bundle a product of compilation?), or redefines “interpreted” to include the feature. If you can think of one, please correct me!


Of course, interpreting and compiling are distinct processes. However, when we want to understand languages more deeply, e.g. to develop our own, thinking of “interpreted” and “compiled” as well-defined language concepts can make it hard to conceptualise what’s really going on.

A much better way to understand and explain these practical differences is to consider a language’s minimal runtime size.

I define “runtime” here to be a generic collection of code that provides essential features and is required to execute the processed input program.

We can think of a “compiled language” colloquially as one with a sufficiently small runtime that it can be practically distributed with the processed input program. Typically it’s some extra routines included inline in the machine code. For example, C’s runtime is very small and might just be code for moving parameters to and from the stack, and copying memory around.
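
For a flavour of what such a routine might look like, here is a sketch in C++ (runtime_memcpy is a hypothetical name; real C runtimes ship heavily optimised versions):

    // A sketch of the kind of tiny routine a C-style runtime provides:
    // copy n bytes from src to dst, one byte at a time.
    void runtime_memcpy(void* dst, const void* src, unsigned long n) {
        unsigned char* d = static_cast<unsigned char*>(dst);
        const unsigned char* s = static_cast<const unsigned char*>(src);
        while (n--) *d++ = *s++;
    }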

On the other hand, a program in an interpreted language typically needs to have its much larger runtime, the interpreter, externally supplied.

We then understand that instead of a sharp distinction, we actually have a smooth gradient, and language features move us up and down that gradient. Anything we can figure out before the input program runs is something we don’t have to do at run time, and hence reduces the minimal runtime size. Here are some examples:

  • Processing source code at run time, such as with an eval() function, means the runtime must include the whole implementation of the language. (A toy evaluator is sketched after this list.)

  • If we can work out in advance which specific functions/methods will be called on instances of generic types or classes, we don’t need to generate a lookup table for them, or include code to do the lookup at run time. (E.g. in C++ we can decide this per-method with the virtual keyword; see the dispatch sketch after this list.)

  • Manipulating and inspecting types at run time requires types to be encoded as values, plus code to handle those encodings.

  • Generic code which can operate on a truly unknown type requires wrapping values with a type tag, plus code to track and unwrap those tags as necessary (see the tagged Value sketch after this list).

    However, some kinds of type genericity permit knowing all possibilities for a generic type in advance, which allows compiling specific, optimised code for each case.

    (Dynamically-typed languages can be thought of as actually statically typed, but with only one type: Value. Another false distinction? [1])

  • If we know the sizes of everything in advance, we don’t need to include code to calculate how large a value is.

  • Structs have smaller runtime requirements than arbitrary dictionaries/maps: we know all the fields in advance, so we can work out how to access them in advance. Field names are not known or usable at run time, so we don’t need code to do a lookup by name. (See the struct-versus-map sketch after this list.)

  • Optimisations like constant folding can be seen as a kind of partial interpretation of the code, where values are computed in advance according to the same rules as the final compiled form. (See the constexpr sketch after this list.)

  • Exceptions require code for dynamically jumping around the call stack.
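
To make the eval() bullet concrete, here is a minimal sketch in C++ (toy_eval is a hypothetical name) that evaluates only integer addition expressions. Even this toy forces scanning and evaluation code into the runtime; a real eval() drags in the entire implementation of the language.

    #include <cctype>
    #include <string>

    // Evaluate expressions like "1+2+30" at run time. Supporting even
    // this much means shipping scanning and evaluation code in the
    // runtime; a full eval() ships the whole language implementation.
    long toy_eval(const std::string& src) {
        long total = 0, term = 0;
        for (char c : src) {
            if (std::isdigit(static_cast<unsigned char>(c))) {
                term = term * 10 + (c - '0');   // extend the current number
            } else if (c == '+') {
                total += term;                  // finish the current term
                term = 0;
            }
        }
        return total + term;
    }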
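
The dispatch bullet, sketched in C++: a non-virtual call is resolved in advance (and can be inlined away entirely), while a virtual call compiles to a vtable lookup performed at run time.

    struct Plain {
        int id() { return 1; }            // target known at compile time
    };

    struct Base {
        virtual int id() { return 2; }    // target found via vtable at run time
        virtual ~Base() = default;
    };

    struct Derived : Base {
        int id() override { return 3; }
    };

    int call_plain(Plain& p)  { return p.id(); }  // direct call, no lookup code needed
    int call_virtual(Base& b) { return b.id(); }  // table and lookup code live in the binary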
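
The type-tag bullets, sketched in C++ with a hypothetical Value type of the kind a dynamically-typed language’s runtime might use: every value carries a tag, and every operation must inspect tags at run time. Note also that sizeof(Value) is fixed and known in advance, which is one way to read the “only one type: Value” remark above.

    #include <stdexcept>

    // Hypothetical tagged value: how a dynamically-typed runtime might
    // represent every datum, as a tag plus a payload.
    struct Value {
        enum class Tag { Int, Bool } tag;
        union { long i; bool b; };
    };

    // Even "add two values" must check tags at run time.
    Value add(Value x, Value y) {
        if (x.tag != Value::Tag::Int || y.tag != Value::Tag::Int)
            throw std::runtime_error("type error: add expects Int");
        Value r;
        r.tag = Value::Tag::Int;
        r.i = x.i + y.i;
        return r;
    }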
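
The struct-versus-map bullet, in C++: a struct field access compiles to a load at a fixed offset, while a map lookup needs the field name kept around at run time, plus string comparison and lookup code in the runtime.

    #include <map>
    #include <string>

    struct Point { int x; int y; };

    // Offset of x computed in advance: compiles to a single load.
    int get_x_struct(const Point& p) {
        return p.x;
    }

    // The name "x" must exist at run time, along with code to compare
    // strings and walk the map's internal structure.
    int get_x_map(const std::map<std::string, int>& p) {
        return p.at("x");
    }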
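
And the constexpr sketch for the constant-folding bullet: C++’s constexpr (C++14 or later, for this form) makes the partial interpretation explicit, with the compiler evaluating the call by the same rules the run-time code would follow.

    // The compiler interprets this call during compilation; the emitted
    // binary contains the constant 120 and no loop.
    constexpr long factorial(int n) {
        long r = 1;
        for (int i = 2; i <= n; ++i) r *= i;
        return r;
    }

    constexpr long folded = factorial(5);  // 120, computed at compile time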


A lot of these tradeoffs involve making firm decisions about what is possible in the language. Certain things are done in advance, and other things are done at run time. Is there a way to be more intelligent about this? What if we could do things at either time?

Languages often provide features that let us choose how dynamic to be, like C++ virtual methods or Go interfaces. But there’s still a hard separation, and it requires choosing in advance.

A few languages go extremely far in blurring the line. An example is Idris. Idris completely eliminates the type vs value distinction that occurs in most “compiled” languages, and lets you pass around types like any other value. It also has dependent types: types with value parameters, such as “array of length N.” This would be trivial to interpret dynamically; Idris is special because it tries to do as much as possible up front. Lots of very in-depth analysis is performed on input programs so that (a) errors can be detected early, and (b) the runtime can be made smaller and faster, by erasing checks and type information. [2] [3]
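
For a faint, compile-time-only echo of “array of length N” in a more mainstream language, C++ can put the length into the type as a non-type template parameter. This is only a loose analogy: Idris’s dependent types go much further, for instance allowing the length to depend on run-time values.

    #include <array>
    #include <cstddef>

    // "Array of length N" as a type: adding arrays of different lengths
    // is rejected before the program ever runs.
    template <std::size_t N>
    std::array<int, N> add_arrays(const std::array<int, N>& a,
                                  const std::array<int, N>& b) {
        std::array<int, N> out{};
        for (std::size_t i = 0; i < N; ++i) out[i] = a[i] + b[i];
        return out;
    }

    // add_arrays(std::array<int, 3>{}, std::array<int, 4>{});  // compile-time error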

In summary: for a deeper and more general understanding of programming languages, think less in terms of “interpreted vs compiled”, and more in terms of “big runtime vs small runtime.”

  1. See Dynamic Languages are Static Languages by Robert Harper. 

  2. See also cstheory: Dependent Types and Compile Time Types.

  3. Idris is conceptually similar to various “automated theorem proving” systems used in mathematics, and the relation is clear: Idris needs to be able to automatically prove very complex things about arbitrary programs.