
2019 Sep 11

Designing a Platform for Programming Language Experimentation

Roughly two years ago, I read a description of how a Forth interpreter works.

The Forth interpreter is very simple. It just does this over and over:
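The loop in question is, roughly, the classic Forth outer interpreter: read a word, look it up in the dictionary, execute it if found, otherwise push it as a number. A minimal sketch of that loop in Go (just an illustration, not the code from that description) might read:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strconv"
    )

    func main() {
        var stack []int
        // The dictionary maps words to their behavior.
        dict := map[string]func(){
            "+": func() {
                n := len(stack)
                stack = append(stack[:n-2], stack[n-2]+stack[n-1])
            },
            ".": func() {
                fmt.Println(stack[len(stack)-1])
                stack = stack[:len(stack)-1]
            },
        }
        in := bufio.NewScanner(os.Stdin)
        in.Split(bufio.ScanWords)
        // The loop: read a word, look it up, execute it;
        // otherwise try to push it as a number.
        for in.Scan() {
            word := in.Text()
            if run, ok := dict[word]; ok {
                run()
            } else if n, err := strconv.Atoi(word); err == nil {
                stack = append(stack, n)
            } else {
                fmt.Fprintln(os.Stderr, "unknown word:", word)
            }
        }
    }

Run it and type 2 3 + . and it prints 5.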

I remember the thrill of feeling that it was in reach. Two hundred lines of Go later, I had written my first interpreter. Since then, the project branched out into a music composition language, an investigation of type systems (including Hindley-Milner type inference), a yearlong search for a Raymond Smullyan book, and most recently, a concatenative language based on Robert Kleffner’s “lambda-compose”. There were several blind alleys along the way. (Among them: trying to use propagation of constraints for type inference.)

Bear in mind that it was only in March of 2017 that I began my career as a professional software developer. I’ve had much to learn not only about the theory of programming languages, but about how to build software. After development on Auklet wound down in the autumn of 2018, my day job turned to short, contract-based work for aimless clients, and my opportunity for mastery diminished. Fiddling with interpreters became my avenue for personal growth: the fun challenge I could keep up indefinitely.

I’ve written and abandoned more prototypes than I care to count; let’s estimate fifty. I’m beginning to get tired of rewriting certain things, which suggests that I’ve finally gotten a wide enough view of the domain that I can think about architecture. However, perhaps not very clearly.

Lambda-compose has done much to crystallize my efforts into a core idea. It’s not that I hadn’t read anything about the structure of compilers; MIT’s Structure and Interpretation of Computer Programs (SICP) and other publications taught me a lot about how lambda calculus-based languages work. The tricky part is that the concatenative language paradigm differs enough from the programming language canon (the applicative canon) that translation is necessary. Hence Kleffner’s paper, which establishes a theory of concatenative programming (albeit emphasizing the type system).



\

When you begin as a software engineer, you want to write good code. You make more experienced engineers into saints who exemplify practices of goodness (mine have been Rob Pike and, more recently, Rich Hickey and Ian Lance Taylor). After some time, you realize that “good” varies with context. It’s not always good to use a more efficient algorithm, for instance, if it’s complex and you’re responsible for making it work. It’s not always good to maximize test coverage, if doing so exerts design pressure that degrades efficiency or readability.

Navigating the many dogmas about how to write code is difficult. Sometimes our practice feels religious rather than scientific. But no person can simply choose their beliefs and doubts. The American pragmatist philosopher C.S. Peirce saw science as a way of settling doubt. If a claim is not doubted, no amount of inquiry can make it more satisfying than it already is.

Thus, feel no shame if you’re a test-driven development (TDD) enthusiast or a functional programming zealot. That’s simply who you are today. Michael Feathers, despite his long advocacy of TDD, recently concluded that it’s one of many ways to achieve code quality. Belief in these dogmas isn’t bad: it’s a reflection of knowledge in a world of subjectivity and uncertainty. Such situated knowledge is all we’ve got.

Thus, there is plenty of room for a diversity of code quality approaches, just as much as there is for programming languages. And there’s an intersection between the two, where programming language design affects code quality. That’s precisely where I find myself.

Concatenative programming’s appeal is consistency and brevity. It treats programming as the composition of transformations of data. It’s similar to functional programming in this regard, but with values anonymous by default. Is the emphasis on composition rather than application worth anything? In what cases is it a good default?
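As a rough illustration (in Go, with names I’ve made up), the concatenative view treats every word as a function from stack to stack, so a program is literally a composition and intermediate values never need names:

    package main

    import "fmt"

    // A word is a transformation of the data stack.
    type word func([]int) []int

    // push returns a word that pushes n.
    func push(n int) word {
        return func(s []int) []int { return append(s, n) }
    }

    // add pops two values and pushes their sum.
    func add(s []int) []int {
        n := len(s)
        return append(s[:n-2], s[n-2]+s[n-1])
    }

    // dup duplicates the top of the stack.
    func dup(s []int) []int {
        return append(s, s[len(s)-1])
    }

    // mul pops two values and pushes their product.
    func mul(s []int) []int {
        n := len(s)
        return append(s[:n-2], s[n-2]*s[n-1])
    }

    // compose chains words left to right: juxtaposition is composition.
    func compose(words ...word) word {
        return func(s []int) []int {
            for _, w := range words {
                s = w(s)
            }
            return s
        }
    }

    func main() {
        // The concatenative program "3 4 + dup *" is one composition.
        program := compose(push(3), push(4), add, dup, mul)
        fmt.Println(program(nil)) // [49]
    }

The applicative equivalent, square(3 + 4), needs nesting or a named temporary for the intermediate sum; here it simply sits on the stack.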



\

I’m not the first to ask these questions, but I think the answers are incomplete. To aid inquiry, I’m developing a platform for language experimentation and collaboration.

The platform is designed to maximize extensibility. In fact, there’s very little content in the platform itself, as the intention is for language syntax to be specified by external packages and supplied as a dependency at run time.

It sounds complex, but it’s very simple. To specify a language, you write three sets of rules: lexing rules, parsing rules, and reduction rules.

The platform provides a set of interfaces and reference implementations of the components required by a lambda-compose machine. Specifically: a stack, an environment, and an expression. If you have these things, you can build a machine that runs any programming language based on lambda-compose, and perhaps a few more.
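As a sketch of what that could look like (the method sets here are my guesses for illustration, not the platform’s actual API), imagine the machine package exposing interfaces along these lines:

    package machine

    // Word is a single reduction step: it transforms the machine state.
    // (Hypothetical method set.)
    type Word interface {
        Reduce(s Stack, e Environment) (Stack, Environment, error)
    }

    // Expression is a sequence of words awaiting reduction.
    type Expression interface {
        Words() []Word
    }

    // Stack holds intermediate values.
    type Stack interface {
        Push(v interface{}) Stack
        Pop() (top interface{}, rest Stack, err error)
    }

    // Environment holds variable bindings.
    type Environment interface {
        Lookup(name string) (interface{}, bool)
        Bind(name string, v interface{}) Environment
    }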

All of these components are separate from language syntax, which is the whole point. To make a specific programming language, implement the parser.Token and machine.Word interfaces with the desired syntax. In other words: define the parsing and reduction rules for your language.
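For example, sticking with the guessed method sets above (the import path and signatures are placeholders, not the real ones), an integer literal in a toy language might parse a digit string and reduce by pushing its value:

    package toy

    import (
        "strconv"

        "example.com/platform/machine" // placeholder import path
    )

    // IntLit is both a token and a word: its syntax is a digit string,
    // and its reduction rule pushes the value onto the stack.
    type IntLit struct {
        Value int
    }

    // Parse plays the role of a parser.Token method here (hypothetical
    // signature): it reports whether the lexeme matches this token.
    func (IntLit) Parse(lexeme string) (machine.Word, bool) {
        n, err := strconv.Atoi(lexeme)
        if err != nil {
            return nil, false
        }
        return IntLit{Value: n}, true
    }

    // Reduce implements machine.Word as sketched above: pushing a
    // literal leaves the environment unchanged.
    func (w IntLit) Reduce(s machine.Stack, e machine.Environment) (machine.Stack, machine.Environment, error) {
        return s.Push(w.Value), e, nil
    }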

Lexing is one area where I have less architectural clarity, but since none of these components are coupled, that’s inconsequential. In any case, flexibility in lexers isn’t as important to me.

The goal is that with a well-designed platform, new aspects of language implementation, such as syntax processing and evaluation, can be developed without depending on each other. That should make it simple to combine implementations and evaluate performance.

Not all decisions have been factored out. The original lambda-compose uses substitution to deal with variable bindings. While this is elegant, I expect each substitution to take time linear in the size of the term, and I have not yet found a way around that. Thus, I opted to use environments, which I expect to be faster. Moreover, the environment implementation can be swapped out.
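A minimal environment along these lines (again my own sketch, not the platform’s code) is just a frame of bindings chained to its enclosing scope, so a variable reference is a map access plus a walk outward rather than a rewrite of the whole term:

    package toy

    // Env chains a frame of bindings to its enclosing scope.
    type Env struct {
        bindings map[string]interface{}
        parent   *Env
    }

    // Bind returns a child scope with one extra binding, leaving the
    // parent untouched, so environments can be shared freely.
    func (e *Env) Bind(name string, v interface{}) *Env {
        return &Env{
            bindings: map[string]interface{}{name: v},
            parent:   e,
        }
    }

    // Lookup searches the current frame, then the enclosing scopes.
    func (e *Env) Lookup(name string) (interface{}, bool) {
        for env := e; env != nil; env = env.parent {
            if v, ok := env.bindings[name]; ok {
                return v, true
            }
        }
        return nil, false
    }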

Things that would be possible with the platform:

As yet there is no I/O facility.

