Monday, February 25, 2008

In praise of mandatory indentation for novice programmers

About four years ago, I created my own programming language for teaching. I'll probably write more about this language at some other time, but for now I want to focus on one feature of the language: the use of mandatory indentation. My experience with this aspect of the language has been so overwhelmingly positive that I will never again voluntarily use a language without mandatory indentation for teaching novice programmers.

Of course, sometimes the choice of language is not under my control. Even when it is, there are always many different factors that go into that choice. But no other single factor I've run across has greater significance. For example, programming language aficionados spend endless hours arguing about static vs dynamic typing, or functional vs object-oriented languages, or strict vs lazy evaluation, or...you get the idea. Those differences can indeed be important, but more so for experienced programmers working on large projects than for novice programmers working on classroom projects. None of these differences individually comes close to the issue of indentation.

I say this with some pain, because I'm a programming languages guy myself. I've taken part in some of those arguments, and spent many hours contemplating the relative merits of many much deeper programming language properties. It hurts me to say that something so shallow as requiring a few extra spaces can have a bigger effect than, say, Hindley-Milner type inference. I wish it weren't so, but that is what my classroom experience tells me, loudly and unambiguously.

Why not mandatory indentation?

The vast majority of languages don't make indentation mandatory. Instead, they usually use explicit syntax to indicate block structure, such as { and }, or BEGIN and END. Yet, if you look at well-written programs in those languages, they are almost always indented sensibly. Furthermore, there's remarkably little disagreement as to what “sensible” indentation looks like. So why not make that sensible indentation mandatory? There are several reasons that are often put forth:

  • It's weird. Because the vast majority of languages don't use it, most programmers aren't used to the idea. Therefore, there's an initial sense of unease.
  • It messes up the scanner/parser. True, mandatory indentation is harder to deal with using traditional scanners and parsers based strictly on regular expressions and context-free grammars, respectively. But it's usually trivial to modify the scanner to keep track of indentation and issue an INDENT token when indentation increases, and one or more OUTDENT tokens when indentation decreases (see the sketch after this list). The parser can then treat these tokens just like normal BEGIN/END keywords. In this approach the scanner is no longer based strictly on regular expressions, but most scanners aren't anyway (for example, when dealing with nested comments). Using the INDENT/OUTDENT tokens, the parser can still be based strictly on context-free grammars.
  • Don't try to take away my freedom! Programmers are a pretty libertarian bunch. Anytime somebody tries to impose rules that they follow 99% of the time anyway, they always focus on the 1% exceptions. For indentation, these exceptions often involve what to do with lines that are too long. So yeah, a language with mandatory indentation should deal gracefully with that issue. Or sometimes the exceptions involve code that is nested 20 levels deep. But these cases are almost always easy to rewrite into an equivalent but shallower structure. One place where I tend to deliberately break indentation rules is with temporary debugging output. I often leave such print statements unindented, so that they're easier to find when it's time to take them out. This is convenient, but I can certainly live without it.
  • I don't want people to be able to read my code! Maybe some people view obfuscated code as job security. As a different example, the former champion in the TopCoder programming contest, John Dethridge, was famous for never indenting. Why? Because in TopCoder, there is a “challenge” phase, where other competitors look at your code and try to find bugs. So there's an incentive to make your code hard for other competitors to understand. I remember teasing him about this once, and he said laughingly “Beware my left-justified fury!” I replied that I'd be more afraid if his fury were right-justified.
  • It doesn't scale. As programs get bigger, both in lines of code and in number of programmers, you run into more mismatches in indentation. For example, you might want to move or copy a loop that was nested 5 levels deep to another location nested 3 levels deep. Or you might need to integrate code written by programmers that used different numbers of spaces per indentation level. Refactoring tools can certainly help here. But, you know, if you're the sort of programmer who would leave the indentation messed up when you moved that loop, just because your language didn't require you to fix it, then I probably don't want to work with you anyway.
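
To make the scanner/parser point concrete, here's a minimal sketch of the INDENT/OUTDENT trick in Java. Treat it as an illustration under simplifying assumptions (spaces-only indentation, blank lines ignored, no error reported when a dedent fails to match an enclosing level), not as a production scanner:

   import java.util.ArrayDeque;
   import java.util.Deque;

   public class IndentScanner {
      // Turn changes in leading whitespace into explicit INDENT/OUTDENT
      // tokens, which the parser can treat just like BEGIN/END keywords.
      public static void scan(String[] lines) {
         Deque<Integer> levels = new ArrayDeque<>();
         levels.push(0);                          // the outermost indentation level
         for (String line : lines) {
            if (line.trim().isEmpty()) continue;  // blank lines don't open or close blocks
            int indent = 0;
            while (indent < line.length() && line.charAt(indent) == ' ') indent++;
            if (indent > levels.peek()) {         // deeper: one block opens
               levels.push(indent);
               System.out.println("INDENT");
            } else {
               while (indent < levels.peek()) {   // shallower: possibly several blocks close
                  levels.pop();
                  System.out.println("OUTDENT");
               }
            }
            System.out.println("LINE " + line.trim());
         }
         while (levels.peek() > 0) {              // close any blocks still open at end of file
            levels.pop();
            System.out.println("OUTDENT");
         }
      }

      public static void main(String[] args) {
         scan(new String[] { "while x < 10", "   x = x + 1", "   if x == 5", "      print x", "done" });
      }
   }

The INDENT and OUTDENT tokens come out exactly where a brace-based language would have { and }.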

What about novices?

Most of the objections above don't really apply to novices. Programming is new to them, so it's all weird anyway. They have no idea what scanners and parsers are. As teachers, we already take away a lot of their freedoms, and we certainly want them to care if somebody (namely us!) can read their code. And novices are usually not going to be writing large enough programs for the scaling issues to be a big problem.

Ok, but what are the benefits for novices?

  • They are already used to the idea of indentation. Both from writing outlines in English class and from nested bullet lists in the (almost) ubiquitous PowerPoint, novices already have experience with the idea of indicating grouping using indentation. This makes such languages much easier for novices to learn. In contrast, explicit markers such as curly braces or BEGIN/END keywords are something novices have much less experience with. However natural such markers might seem to us, they are not natural for novices, and are a constant source of mistakes. (Worse, a typical novice strategy for dealing with those mistakes is to randomly insert or delete braces until it compiles—a strategy Peter Lee used to call “programming by random perturbation”.)
  • Less is more. Or, put another way, smaller is better. To the novice, a fifteen-line program is less intimidating than a twenty-line program, and a program that fits on one page is much easier to understand than a program that spans multiple pages. Those extra lines taken up by explicit braces or BEGIN/END keywords really add up. Even if you use a style that puts a { at the end of the previous line, the } still usually goes on a line by itself. I shudder now every time I look at a Java program and see a code fragment like
                 ...
                 }
               }
             }
           }
         }
       }
    Note that I am not advocating compressing everything into as few lines as possible (a la Perl Golf). Nor am I saying that all redundancy is bad. But in this case, the redundancy of explicit markers was hurting more than it was helping.
  • Mandatory indentation promotes good habits. I've taught plenty of novices in languages that did not require indentation. If the language doesn't require it, they won't do it, or at least not consistently. If they are using an IDE that indents for them, fine, but sometimes they need to write code in a primitive editor like Notepad, and then they just won't bother. Even if I require the final submission to be properly indented, all too often they will do all their development without indentation, and then indent the code just before turning it in (kind of like the typical novice approach to commenting). Of course, indenting after the fact means that they don't get any of the benefits from indenting their code, such as making debugging easier.

    On the other hand, if the language makes indentation mandatory, then the novice needs to keep their indentation up to date during the entire development cycle, so they will reap those benefits. Since I started using this language, I've also noticed improved indentation habits even when students switch to other languages without mandatory indentation. I can at least hope that this habit is permanent, although I have no evidence to back that up.

A surprise

I was shocked by how much the mandatory indentation seemed to help my students. I did not come into this expecting much of a change at all. I had experience with mandatory indentation in a couple of languages (most notably Haskell), and I had found it to be a pleasant way to code. Also, I had heard good things about people using Python in the classroom. However, I was by no means a convert at the time that I was designing my language.

I had two motivations for making indentation mandatory in the language. First, this language was designed to be the second language most of the students saw, and I wanted to expose them to a range of language ideas that they had not seen before. For example, their first language used static typing so I made my language use dynamic typing. Similarly, their first language did not make indentation mandatory, so I took the opposite route in my language. My second motivation was simply that I was annoyed. I was tired of students coming to me with code that was either completely unindented or, worse, randomly indented. I figured that making the compiler enforce indentation was the surest way to stop this.

Imagine my surprise when I started teaching this language and found the students picking it up faster than any language I had ever taught before. As fond as I am of the language, I'm certainly under no illusions that it's the ultimate teaching language. After carefully watching the kinds of mistakes the students were and were not making, I gradually realized that the mandatory indentation was the key to why they were doing better. This seemed to manifest itself in two ways, one obvious and one more subtle. The obvious way was that they were simply spending much less time fighting the syntax.

The more subtle way was that they appeared to be finding it easier to hold short code fragments in their head and figure out exactly what the fragment was doing. I conjecture that there may be some kind of seven-plus-or-minus-two phenomenon going on here, where adding extra lines to a code fragment in the form of explicit braces or BEGIN/END keywords pushes the code fragment above some size limit of what novices can hold in their heads. This wouldn't affect expert programmers as much, because they see beneath the braces to the chunks underneath, but novices live at the level of syntax.

Whatever the explanation, I'm now a convert to the power of mandatory indentation for novices. I've never taught Python, but I suspect those who have may have had similar experiences. If so, I'd love to hear from you.

Thursday, February 14, 2008

Games for Programmers: Zendo

In addition to teaching and programming, I also enjoy playing board games. This is the first in an irregular series of game recommendations for programmers. Of course, you don't have to be a programmer to enjoy these games, but if you are a programmer, then I think there's a pretty good chance that you'll like them.

Zendo is a game about debugging. Ok, it's not really about debugging, but you'll see what I mean in a moment.

Zendo is a logic game played with a bunch of Icehouse pieces (plastic pyramids in several different colors and sizes) and some markers.

[Photo: Icehouse pieces and markers, by Terraliptar]

During the game, pyramids will be placed in groups (called “koans”) that either satisfy or don't satisfy some secret rule. The players try to figure out the rule based on these examples. For example, consider the following group:


[Photo: an example group of pyramids, by Ted Alspach]

This group might satisfy the rule “at least one piece off the table” or “exactly one tilted piece” or “fewer than two green pieces” or “contains pieces of three different sizes”. It would not satisfy the rule “no two pieces of the same color”.

One player is designated as the master, who chooses the secret rule and judges whether proposed groups satisfy the rule. The master initially sets out one group that satisfies the rule (marked with a white stone), and one that doesn't (marked with a black stone). The other players take turns proposing groups, which the master then marks with white or black stones. Later in the game, players try to guess the rule, based on the evidence they've seen. If a guess is correct, that player wins the round, and a new master is chosen for the next round. If a guess is incorrect, then the master creates a counter-example, either a group that satisfies the secret rule but not the player's rule, or a group that satisfies the player's rule but not the secret rule. Either way, the master marks the counter-example with the appropriately colored stone.

Those are the basics of the game. There are a few more details that can be found in the complete Zendo rules. (That site also offers a fascinating design history of the game.)

As a player, you are constantly coming up with possible rules and checking whether they are consistent with the examples that are already on the table. Often you will have more than one possible rule in mind. When it's your turn, you try to design a group that will help narrow down the choices. Ideally, you want to make your group so that either a positive or a negative answer will support some possible rules and eliminate others.

The primary challenge as master is coming up with a good secret rule, one that is not too hard and not too easy. Of course, what counts as too hard or too easy depends on who you are playing with. Looney Labs sells a pack of Zendo rule cards to get you started. These rules are fairly easy and are perfect for those new to the game. Eventually, you will want to start creating your own rules.

Debugging

Debugging is a fundamental skill of the programmer. When it's your own bug or you're operating under severe time pressure, negative emotional reactions can make debugging quite stressful. But when you are helping somebody else debug their code, it can be quite fun, kind of like a logic puzzle.

There are two basic approaches to debugging; most debugging sessions combine both. In one approach, you step through the code, re-thinking each line and looking for something suspicious.

In the other approach, you look at the observable evidence—for this input, it's producing that output, when it should be producing this other output, or the web site crashes every day between 5pm and 6pm— and try to come up with a hypothesis for what might be going wrong. For example, you might hypothesize that something is going wrong when two clients happen to have the same name, or that somebody is mixing up row-column coordinates with X-Y coordinates. Often you try to come up with an experiment that might confirm or deny your hypothesis, or help select between several competing hypotheses. Once you have a fairly solid hypothesis, you can usually narrow down where in the code to go looking for the error.

It's this latter approach to debugging that Zendo captures so well, that of making and testing and refining and discarding hypotheses.

Of course, that's the perspective of a computer scientist. Other kinds of scientists might prefer to think of Zendo in terms of the scientific method.

How to get it

Zendo used to be sold in a nice boxed set, but that is long out of print. In fact, I didn't get my copy until after it was already out of print. I got it as a gift from my BoardGameGeek Secret Santa, a charming annual tradition where hundreds of boardgamers anonymously send Christmas presents (games, naturally) to other gamers around the world, usually gamers that the sender doesn't even know.

You can still buy the pyramids from Looney Labs, where they are sold as Treehouse sets. You would need at least four sets to play Zendo—five would be better. It's kind of expensive that way, but fortunately there are a whole bunch of other games you can play with the pieces. You would still need a set of markers, but poker chips or coins in several denominations work fine.

In fact, with only minor modifications, it's easy to improvise a set of pieces for Zendo out of common household objects, such as bottles of beer.

Friday, February 8, 2008

Ten Years of Purely Functional Data Structures

In 1998, I published a book called Purely Functional Data Structures. Ten years later, the book is still selling well. (Every time I get a royalty check, my wife the skeptic says “People are still buying that?!”) The ten-year anniversary seems like a good time to reflect back on my experience with this book.

Origins

My first introduction to functional programming was at Carnegie Mellon University, where I worked for Peter Lee and Phil Koopman on an implementation of a subset of Haskell (called “Eddie”), written in Standard ML. So right from the very beginning, I was working with a mix of strict and lazy evaluation.

I've always been fascinated by data structures, and I soon realized that something was strange about data structures in these two languages. On the one hand, when they worked, data structures were more pleasant to work with than in any other languages I had ever used before. On the other hand, sometimes they just didn't work.

The issue was immutability. In pure functional programming, all data structures are immutable, meaning that they cannot be changed once created. Of course, data structures frequently need to be changed, so what happens is that you create a new copy of the data structure that incorporates the change, without actually modifying the old copy. This can often be done efficiently by having the new structure share large chunks of the old one. For example, stacks are trivial to implement this way. Tree-like structures such as binary search trees and many kinds of priority queues are almost as easy.
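
To make the sharing concrete, here is a minimal sketch of such a stack (in Java, deliberately, to emphasize that the technique is not tied to functional languages; the names are mine, not code from the book):

   // An immutable stack: push and pop never modify an existing stack;
   // each returns a new stack that shares structure with the old one.
   public final class Stack<T> {
      private static final Stack<?> EMPTY = new Stack<>(null, null);

      private final T head;          // top element (unused in the empty stack)
      private final Stack<T> tail;   // the rest of the stack: shared, never copied

      private Stack(T head, Stack<T> tail) { this.head = head; this.tail = tail; }

      @SuppressWarnings("unchecked")
      public static <T> Stack<T> empty() { return (Stack<T>) EMPTY; }

      public boolean isEmpty() { return this == EMPTY; }

      // O(1) and non-destructive: one new cell, pointing at the old stack.
      public Stack<T> push(T x) { return new Stack<>(x, this); }

      public T top() {
         if (isEmpty()) throw new IllegalStateException("empty stack");
         return head;
      }

      // O(1): "popping" just returns the (shared) tail; the old stack is untouched.
      public Stack<T> pop() {
         if (isEmpty()) throw new IllegalStateException("empty stack");
         return tail;
      }
   }

After s2 = s1.push(x), both s1 and s2 remain fully usable, and they share every cell of s1. That sharing is exactly why the "copy" is cheap.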

However, some common data structures, such as FIFO queues and arrays, can actually be quite difficult to implement this way. Some people take this as a sign that functional languages are impractical. However, it's not the fault of functional programming per se, but rather of immutability. Immutable data structures are just as awkward to implement in conventional languages such as Java. (Of course, immutable data structures also offer huge advantages, which is why, for example, strings are immutable in Java.)

In 1993, I ran across an interesting paper by Tyng-Ruey Chuang and Benjamin Goldberg, showing how to implement efficient purely functional queues and double-ended queues. Their result was quite nice, but rather complicated, so I started looking for a simpler approach. Eventually, I came up with a much simpler approach based on lazy evaluation, leading to my first paper in this area. (When I say simpler here, I mean simpler to implement—the new structure was actually harder to analyze than Chuang and Goldberg's.)
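
The lazy analysis is more than a blog post can do justice to, but the underlying two-list idea is easy to show. Here is a sketch in Java of the classic eager variant (the standard batched queue, not the construction from my paper, which layers lazy evaluation on top of this idea so that it stays efficient even when old versions of the queue are reused):

   // A purely functional queue built from two immutable lists: elements are
   // enqueued onto the rear list (in reverse order) and dequeued from the
   // front list; when the front runs out, the rear is reversed to become
   // the new front.
   public final class BatchedQueue<T> {
      private record Cell<U>(U head, Cell<U> tail) {}   // a tiny immutable cons list

      private final Cell<T> front;   // next elements to dequeue, in order
      private final Cell<T> rear;    // recently enqueued elements, reversed

      public BatchedQueue() { this(null, null); }
      private BatchedQueue(Cell<T> front, Cell<T> rear) {
         this.front = front;
         this.rear = rear;
      }

      // Invariant: front is empty only if the whole queue is empty.
      public boolean isEmpty() { return front == null; }

      // O(1): cons onto the rear list.
      public BatchedQueue<T> enqueue(T x) {
         return check(front, new Cell<>(x, rear));
      }

      public T head() {
         if (front == null) throw new IllegalStateException("empty queue");
         return front.head();
      }

      // O(1) amortized: usually just drops the first front cell; occasionally
      // pays to reverse the whole rear list into a new front.
      public BatchedQueue<T> dequeue() {
         if (front == null) throw new IllegalStateException("empty queue");
         return check(front.tail(), rear);
      }

      // Restore the invariant whenever the front list becomes empty.
      private static <U> BatchedQueue<U> check(Cell<U> front, Cell<U> rear) {
         if (front != null) return new BatchedQueue<>(front, rear);
         Cell<U> reversed = null;
         for (Cell<U> c = rear; c != null; c = c.tail()) {
            reversed = new Cell<>(c.head(), reversed);
         }
         return new BatchedQueue<>(reversed, null);
      }
   }

Each element is reversed at most once, so the occasional reversal amortizes to O(1) per operation, as long as you don't keep operating on old versions of the queue. Removing that restriction is precisely what the lazy evaluation buys you.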

Almost immediately after my paper on queues, I worked on a paper about functional arrays, which I called random-access lists because they combined the extensibility of lists with the random access of arrays. (Because the lead time on journal papers is longer than the lead time for conferences, this paper actually appeared before the paper on queues, even though it was written later.) The approach I used, which turned out to be very similar to one used earlier by Gene Myers, was based on properties of an obscure number system called skew binary numbers. This marked the beginning of a second theme of my work, that of designing implementations of data structures in analogy to number systems. For example, inserting an element into a data structure is similar to incrementing a number, and merging two data structures is similar to adding two numbers. I called this approach numerical representations. The idea was to choose or create a number system with the properties you want (such as constant-time addition), and then build the data structure on top of that. The advantage of this approach is that you can often do the hard work in a simplified setting, where you don't have to worry about carrying around the actual data in your data structure.
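
To give the flavor of a numerical representation, here is a sketch in Java of just the arithmetic: incrementing a skew binary number. (In the actual random-access list, each weight carries a complete tree of that many elements, so inserting an element mirrors this increment step.) A skew binary number is a sum of weights of the form 2^k - 1, where only the smallest weight is allowed to appear twice; incrementing touches at most the two smallest weights, so it takes constant time:

   import java.util.ArrayDeque;
   import java.util.Deque;

   public class SkewBinary {
      // A number is stored as the list of its weights, smallest first.
      // If the two smallest weights are equal, merge them using the
      // identity w + w + 1 = 2w + 1 (the next weight up); otherwise,
      // add a new weight-1 digit. Either way, O(1) work.
      static Deque<Long> increment(Deque<Long> weights) {
         if (weights.size() >= 2) {
            Long first = weights.pollFirst();
            if (first.equals(weights.peekFirst())) {
               weights.pollFirst();
               weights.addFirst(2 * first + 1);
               return weights;
            }
            weights.addFirst(first);
         }
         weights.addFirst(1L);
         return weights;
      }

      public static void main(String[] args) {
         Deque<Long> n = new ArrayDeque<>();   // zero: no weights at all
         for (int i = 1; i <= 10; i++) {
            n = increment(n);
            System.out.println(i + " = " + n); // e.g. 5 = [1, 1, 3], 10 = [3, 7]
         }
      }
   }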

Amusingly, this paper marked the second round of a circle dance between Rob Hoogerwoord, Tyng-Ruey Chuang, and me. First, Hoogerwoord published a paper on functional queues, then Chuang, then me. And again with functional arrays, first Hoogerwoord published a paper, then Chuang, then me.

Around this time, I was looking for a thesis topic (that period that John Reynolds describes as “making even the best student look bad”). Fortunately, my advisor Peter Lee had patience with me during this search. I considered a bunch of topics related to functional programming, but none of them seemed right. I was enjoying the work I was doing with functional data structures, but there didn't really seem to be enough meat there for a dissertation. That changed one day during a walk in the snow.

I had become fascinated by the idea of concatenation (appending two lists). This is trivial to do in conventional languages using, say, doubly-linked lists. But it's much harder to do efficiently with immutable lists. I thought that I had come up with a new approach, so I made an appointment to talk to Danny Sleator about it. He had been involved in some of the early work in this topic (along with Neil Sarnak and Bob Tarjan), and his office was just a few floors below mine. The day before the meeting, I realized that my approach was completely broken. Afraid of looking like an idiot, I went into a frenzy, playing with different variations, and a few hours later I came up with an approach again based on lazy evaluation. I was sure that this approach worked, but I had no idea how to prove it. (Again, this was a case where the implementation was simple, but the analysis was a bear.) My wife had our car somewhere else that day, so I ended up walking home. The walk usually took about 45 minutes, but it was snowing pretty hard, so it took about twice that long. The whole way home I thought about nothing but how to analyze my data structure. I knew it had something to do with amortization, but the usual techniques of amortization weren't working. About halfway home, I came up with the idea of using debits instead of credits, and by the time I got home the entire framework of how to analyze data structures involving lazy evaluation had crystallized in my head.

With this framework in mind, I now believed that there was enough meat there for a dissertation, and I sailed through my thesis proposal and, about a year later, my thesis defense.

The Book

After my defense, Peter Lee suggested that I try to publish my dissertation as a book. He contacted two publishers he had worked with before, and one of them, Lauren Cowles of Cambridge University Press, bit.

The process of turning my dissertation into a book was pure joy. I thought that the basic organization of my dissertation was pretty solid, so mostly I was able to focus on adding and adjusting things to make it work better as a book. For example, I no longer had the constraint from my dissertation of having to focus on original work, so I was free to add data structures that had been developed by other people.

The main additions were expanded introductory material (such as my simplification of red-black trees, which was developed a few weeks after my thesis defense in a series of emails with Richard Bird), exercises, and an appendix including all the source code in Haskell (the main text used source code in Standard ML).

After several months of hacking, I was able to ship off a giant Postscript file to Lauren Cowles, and that was that.

Well, not quite. Lauren asked me what I wanted for cover art. My wife was an avid quilter, and I liked the idea of having a suitably geeky quilt pattern on the cover. I adapted a quilt pattern called Flying Geese, and sent a sketch to Lauren, who got a graphic artist to draw up the final version. I was quite happy with it. Later, I found out that Flying Geese was actually the name of a different pattern, and to this day, I don't know the correct name of the pattern. It looks a little different on the book than in quilts because quilters usually use only two levels of the recursion, but I used three.

Reactions

Initial reviews were positive and the hardcover edition sold out pretty quickly, so Cambridge University Press released a paperback edition the following year. I don't know of any courses that used it as the main textbook (it would have to be a pretty specialized course to do that), but several mainstream courses used it as a supplementary textbook.

Sales were steady but trending downward, until 2004, when a book review by Andrew Cooke appeared on Slashdot. After that, sales doubled and have stayed at that level for several years. Of course, I can't be sure this increase was because of Slashdot, but it seems likely. Thanks, Andrew!

The most interesting place the book has shown up was in the data of a TopCoder programming contest problem involving sorting books on a bookshelf. It was part of a list of CS textbooks. (And, no, I wasn't the author of the problem, although I have written some problems for TopCoder.)

The funniest moment about the book was when I worked at Columbia University. A potential grad student from an unrelated field in CS was visiting from Germany, and I was talking to him about the department. When I told him about my research, he said, “My roommate has been reading a book about something like that. I don't remember the name, but do you know which book I mean?” I reached over and pulled my book off the shelf and said, “Do you mean this one?” His mouth fell open.

The most flattering review of the book appeared a few years ago in a blog by José Antonio Ortega Ruiz, who said

“The exposition is extremely elegant and synthetic, and i’ve spent many, many hours immersed in its 220 pages (i can’t think of any other programming book with a better signal to noise ratio, really).”
Thanks, jao!

Complaints

Of course, you can't please everybody. The most common complaints I've heard are

  • No hash tables. This was mentioned in the first review of the book on Amazon, which called the lack of hash tables a “major omission, likely because they don't fit well into a functional environment.” More than a year later, a second reviewer mentioned that hash tables are in fact in the book, but buried in an exercise. Both reviewers are correct. I only gave a very short description of hash tables in an exercise, because there just wasn't that much to say. A hash table is basically the composition of two finite maps, one approximate and one exact (a sketch of this decomposition appears after this list). The choice of what structures to use for those finite maps is up to you. Typically, an array is used for the approximate map, but, as the first reviewer noted, arrays do not fit well into a functional environment.
  • No delete for red-black trees. I present my simplified form of red-black trees very early in the book as an introduction to functional data structures, but I only describe insertions, not deletions. There were two main reasons for that omission. First, deletions are much messier than insertions, and the introduction did not seem like the right place for a lot of messy details. Second, and more importantly, probably 95+% of uses of functional red-black trees don't need deletions anyway. Because the trees are immutable, when you want to delete something, you can simply revert to a previous version of the tree from before the unwanted item was inserted. This is almost always both the easiest and the right thing to do. The exception is when you inserted something else that you want to keep after the item to be deleted, but this is extremely rare.
  • Why Standard ML? In the last 10 years, Haskell has become steadily more popular, while Standard ML has largely been replaced by OCaml. A frequent complaint has been people wishing that Haskell was in the main text, with Standard ML (or OCaml) in the appendix, instead of the reverse. Even at the time, I debated with myself about the choice of Standard ML, particularly because in some chapters I had to fake extensions to the language, such as lazy evaluation and polymorphic recursion. I went with Standard ML for two main reasons. First, I wanted to be able to distinguish between lazy evaluation and strict evaluation, which was much more difficult in Haskell. Second, I really wanted to be able to give module signatures for all the common structures (stacks, queues, etc.) that I was going to be revisiting over and over. You can sort of do this in Haskell, but not cleanly.
  • Too esoteric/impractical. Guilty as charged! I did try to point out a few data structures along the way that were especially practical, but that was not my main focus. If it were, I certainly would not have included umpteen different implementations of queues! Instead, I was trying to focus on design techniques, so that readers could use those techniques to design their own data structures. Many of the example data structures were included as illustrations of those design techniques, not because the examples were particularly practical or valuable by themselves.
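
To illustrate the point about hash tables, here is a tiny sketch in Java (illustrative names only). The array indexed by the hashed key is the approximate map, where many keys can collide; the association list inside each bucket is the exact map. And the array is, of course, exactly the part that does not translate well to a purely functional setting:

   import java.util.LinkedList;
   import java.util.List;

   // A hash table as the composition of two finite maps: an approximate
   // map from keys to buckets (the array plus the hash function), and an
   // exact map within each bucket (here, a simple association list).
   public class TinyHashMap<K, V> {
      private record Entry<A, B>(A key, B value) {}

      private final List<Entry<K, V>>[] buckets;

      @SuppressWarnings("unchecked")
      public TinyHashMap(int capacity) {
         buckets = new List[capacity];
         for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
      }

      // The approximate map: different keys may land in the same bucket.
      private List<Entry<K, V>> bucketFor(K key) {
         return buckets[Math.floorMod(key.hashCode(), buckets.length)];
      }

      public void put(K key, V value) {
         List<Entry<K, V>> bucket = bucketFor(key);
         bucket.removeIf(e -> e.key().equals(key));   // replace any old binding
         bucket.add(new Entry<>(key, value));
      }

      // The exact map: search the bucket for the actual key.
      public V get(K key) {
         for (Entry<K, V> e : bucketFor(key)) {
            if (e.key().equals(key)) return e.value();
         }
         return null;                                 // not present
      }
   }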

The Future?

People sometimes ask me if I'm planning a second edition. The answer is maybe, but not anytime soon. I'm still pretty happy with what I said and how I said it, so I don't feel a burning need to go back and correct things. If I wrote a second edition, it would be to modernize the presentation and to add some new data structures and techniques. For example, I might include the finger trees of Ralf Hinze and Ross Paterson.

In terms of the presentation, however, the biggest question would be what language to use. Standard ML probably doesn't make sense any more, but I'm still not sure Haskell is ready to take over, for the same reasons mentioned above. I suppose I could switch to OCaml, but I'm not sure that's enough of a change to warrant a new edition. I have faith that the vast majority of my readers are perfectly capable of making the translation from Standard ML to OCaml themselves.

Friday, February 1, 2008

Boolean Confusion

A big part of learning to program is building up effective mental models for how various logical structures work. A big part of learning to teach programming is learning how to diagnose broken or ineffective mental models by reading someone's code. One common confusion that shows up very clearly in the code of novice programmers is thinking of booleans and control expressions as being somehow two separate things.

By “control expressions”, I mean the expressions that are used in if-statements and while-loops. For example, in

   while (i < 100) {
      ...
   }
the control expression is “i < 100”. Of course, the value of the control expression is a boolean. Novices can even tell you that, but they haven't quite internalized it yet.

Suppose you wanted a function to test whether an integer is positive. An experienced programmer might write

   boolean isPositive(int n) {
      return n > 0;
   }
but a novice programmer would be more likely to write
   boolean isPositive(int n) {
      if (n > 0) {
         return true;
      }
      else {
         return false;
      }
   }
Why? Because the function is supposed to return a boolean, but the novice doesn't quite believe that n > 0 is a boolean—it's a control expression!

On the flip side, suppose you have a loop being controlled by a boolean variable named flag. An experienced programmer might write

   while (flag) {
      ...
   }
but a novice programmer would be more likely to write
   while (flag == true) {
      ...
   }
Why? Because the while-loop needs a control expression, and the novice doesn't quite believe that flag is a control expression—it's a boolean! The novice uses the == operator to convert the boolean into a control expression.

In extreme cases, you can even see both confusions simultaneously, as in this gem, where a public method checks the state of a private variable:

   public boolean isReady() {
      if (ready == true) {
         return true;
      }
      else {
         return false;
      }
   }
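For the record, the experienced programmer's version collapses the whole method to a single line:
   public boolean isReady() {
      return ready;
   }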
What are some programming idioms you've seen that help you diagnose confusions like these?