Paul Graham wrote an essay a while back arguing that succinct (i.e. short) programs are more powerful, whatever that means. There does seem to be a correlation between languages we believe are more powerful (e.g. Lisp, Haskell) and shorter programs, compared to mainstream languages (Java, C#, C++). Why is this so? Certainly, popular languages demand more messy boilerplate. In many cases, higher-order functions can simplify certain code patterns; C# and C++ now support this fairly well with lambdas, and Java comes sorta’ close with anonymous classes. But when I looked at how much more succinct Factor (a concatenative language) programs are compared even to Lisp and Haskell, it dawned on me why some languages are more succinct than others: intermediate variables.
In procedural and OO languages, you spend a lot of time defining and using variables, which makes your code verbose. In functional languages, you tend to use function composition, which hides many intermediate variables. For example, “f(g(x))” is shorter than “y = g(x); return f(y);”. In Factor it would just be “g f”, without any superfluous variables. Take a look at any chunk of code and imagine removing all the variable definitions and uses, leaving only the functions and control flow. It would be a lot shorter, right? I think that explains a large part of why some languages are shorter than others.
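To make the comparison concrete, here is a minimal sketch in Python (the names `compose`, `f`, `g`, `verbose`, and `point_free` are all illustrative, not from any real library):

```python
def compose(f, g):
    """Return the composition f(g(x)), hiding the intermediate value."""
    return lambda x: f(g(x))

def g(x):
    return x + 1

def f(x):
    return x * 2

# Verbose style: the intermediate variable y appears explicitly.
def verbose(x):
    y = g(x)
    return f(y)

# Point-free style: no intermediate variable at all.
point_free = compose(f, g)

print(verbose(3))     # 8
print(point_free(3))  # 8 -- same result, one fewer name in the source
```

The two definitions compute the same thing; the point-free one simply never names the value flowing between `g` and `f`.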
I wrote a concatenative embedded DSL in Scheme to see if programs really do become shorter. It doesn’t quite work like function composition or F#’s pipeline operator. Both of those require that the number of outputs from one function match the number of inputs to the next (usually just one argument!). In a concatenative language, excess arguments are stored on the stack: if g produces three return values but f only needs two, the extra value is left on the stack for someone else to consume later. In Scheme, the apply procedure is too rigid and support for multiple values is hopelessly broken, so I had to work around both to produce a pitifully inefficient execution engine for this programming style. When I get higher-order words working I’ll post the code.
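The stack discipline described above can be sketched in a few lines. This is a toy model in Python, not the Scheme engine from the post: each word pops only the arguments it needs and pushes its results, so extra values simply stay on the stack for a later word.

```python
def run(program, stack=None):
    """Evaluate a list of words left to right against a shared data stack."""
    stack = list(stack or [])
    for word in program:
        word(stack)
    return stack

def lit(value):
    """Make a word that pushes a literal onto the stack."""
    return lambda stack: stack.append(value)

def g(stack):
    # g "returns" three values by pushing all of them.
    stack.extend([1, 2, 3])

def f(stack):
    # f needs only two inputs; it pops them, leaving anything below alone.
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

print(run([g, f]))  # [1, 5] -- the extra 1 is still there for a later word
```

This is exactly the g/f scenario from the paragraph above: no plumbing code is needed to route the surplus value, because the stack itself is the plumbing.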
Of course, the central question remains: what does “power” mean for a language? Graham’s definition is circular: high-level languages are succinct, and he thinks high-level languages are more “powerful”; therefore, succinct = powerful. I disagree. I don’t have a definition of my own, but I think continuations are a good example of “power”: languages without them give you a fixed handful of control-flow operations, and if those don’t fit, you have to write tons of code to, in effect, simulate continuations. Haskell’s laziness is another example; without it, you have to jump through hoops to simulate it in another language. Nevertheless, shorter code is nice as long as it doesn’t compromise readability.
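As one example of those hoops: simulating call-by-need in a strict language means wrapping computations in explicit thunks. A hedged Python sketch (the `Thunk` class and `expensive` function are purely illustrative):

```python
class Thunk:
    """Delay a computation and memoize its result on first force (call-by-need)."""
    def __init__(self, fn):
        self.fn = fn
        self.done = False
        self.value = None

    def force(self):
        if not self.done:
            self.value = self.fn()
            self.done = True
        return self.value

calls = []

def expensive():
    calls.append(1)  # record that the computation actually ran
    return 42

t = Thunk(expensive)   # nothing computed yet
print(len(calls))      # 0
print(t.force())       # 42 -- computed on first demand
print(t.force())       # 42 -- memoized, not recomputed
print(len(calls))      # 1
```

In Haskell all of this bookkeeping is invisible; here every delayed value has to be wrapped and forced by hand, which is precisely the kind of simulation overhead a “powerful” feature removes.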