
Are We There Yet?

I just watched Rich Hickey’s talk “Are we there yet?” for the second time. An awesome presentation that I highly recommend you put on your list to watch.

Clojure for Beginners

At the moment there is a large, rambling discussion on the Clojure mailing list about Clojure needing to be easier for beginners. There is some merit to the idea, but I find this type of discussion rarely produces anything worthwhile. Despite the large amount of hot air, a couple of people have stepped forward with some useful videos for setting up Clojure in Eclipse and IntelliJ IDEA.

Laurent Petit posted the following video on the Eclipse plugin:

and Greg Slepak posted the following video (via his blog):

If you’re considering starting with Clojure these are reasonable alternatives, despite being somewhat immature. Personally I made life difficult for myself by deciding to use Emacs. Emacs is great but the learning curve is horrible, and learning a new language and a new (difficult) editor at the same time is tough (although ultimately I’m glad I did it).


Gradle

I’ve been an Ant user for a long time. Ant has many flaws as a build tool, but it does have the advantage of being simple, predictable and reasonably well documented. Maven has also been around for a long time; while I like the fact that it gives you lots of functionality out of the box, I dislike how rigid and opaque it is.

Recently I’ve started to use Gradle. Gradle is an extremely impressive project. Even though it has yet to reach the 1.0 milestone, it has extensive documentation and is obviously well thought out.

Like Maven it provides impressive out-of-the-box functionality with very little work. However, it makes setting up custom configurations much easier – at least in my opinion. Ant tasks are also available within Gradle should you have a requirement to use them.

Although I’ve only used it on smaller single-project builds, from what I’ve read multi-project builds look simple and easy to deal with.

The underlying dependency management uses Ivy, which means Gradle automatically gets full compatibility with Maven repositories while keeping the flexibility of Ivy configurations.

One of the plugins I particularly like is the Idea plugin. It makes it easy to create a full IntelliJ IDEA project with all the dependencies set up to point straight into the local Ivy cache. Unfortunately I didn’t manage to get it working correctly for a multi-project build, but otherwise it is still great for creating a new project from scratch.

Gradle is definitely something to look into if you’re looking around for a build tool. Some basic Groovy knowledge would be an advantage however.
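To give a flavour of what I mean about getting a lot for very little work, here is a sketch of a small build script. This is illustrative only: the group, version and dependency coordinates are made up, and the compile/testCompile configuration names reflect the pre-1.0 Gradle being described here.

```groovy
// Sketch of a minimal build.gradle (pre-1.0 era); coordinates are made up.
apply plugin: 'java'
apply plugin: 'idea'   // generates IntelliJ IDEA project files

group = 'com.example'
version = '0.1'

repositories {
    mavenCentral()
}

dependencies {
    compile 'commons-lang:commons-lang:2.5'
    testCompile 'junit:junit:4.8.2'
}
```

With just this, tasks like compiling, testing, packaging and IDEA project generation are available without any further configuration.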

Simple Macro Followup

Nguyen (sorry wordpress stuffed up your name) offered up a simplified version of the cond macro. Behold:

(defmacro condd [pred & clauses]
  `(letfn [(predd# [test-expr# expr#]
             (->> test-expr# ~pred))]
     (condp predd# nil
       ~@clauses)))

If we macro expand it:

(macroexpand-1 '(condd is-two?
                       1 (print "was one")
                       2 (print "was two")))

We get (after cleaning it up):

(letfn [(predd [test-expr expr]
               (->> test-expr is-two?))]
       (condp predd nil
              1 (print "was one")
              2 (print "was two")))

Which is pretty much the same as writing the following:

(condp (fn [expr _] (->> expr is-two?)) nil
       1 (print "was one")
       2 (print "was two"))

So it seems that condp can be bent to our will by passing nil as the expression and then ignoring it in the predicate function. Thanks Nguyen.
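To round this out, here is a self-contained sketch of condd in use. The macro body is the completed version with the clauses spliced in, and is-two? is a made-up predicate for the example:

```clojure
;; The condd macro from above, completed with the clauses spliced in.
(defmacro condd [pred & clauses]
  `(letfn [(predd# [test-expr# expr#]
             (->> test-expr# ~pred))]
     (condp predd# nil
       ~@clauses)))

;; A made-up predicate for the example:
(defn is-two? [x] (= 2 x))

(condd is-two?
       1 "was one"
       2 "was two")
;; => "was two"
```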

Simple Macro

I recently had a requirement for a version of cond that took a predicate expression and passed the clause as a parameter. While condp was close it didn’t really match what I was trying to do. Macros are still new to me so this was a good learning experience.

Here’s an example of the code I wished to write:

(conde app/key-pressed?
	:up (move-player state 1 -1)
	:down (move-player state 1 1))

Alternatively the predicate can be a form, in which case the test value is appended to the end of the form. For example:

(conde (= 2)
       1 (print "was one")
       2 (print "was two"))

Since cond is almost what I wanted I took that code and altered it to get the following:

(defmacro conde
  "Takes a predicate form and a set of test/expr pairs. It
  evaluates each test one at a time against the predicate.  If a
  test returns logical true, conde evaluates and returns the value
  of the corresponding expr and doesn't evaluate any of the other
  tests or exprs. (conde) returns nil."
  [pred & clauses]
  (when clauses
    (let [pred (if (not (seq? pred)) (list pred) pred)]
      (list 'if `(~@pred ~(first clauses))
            (if (next clauses)
              (second clauses)
              (throw (IllegalArgumentException.
                      "conde requires an even number of forms")))
            (list* `conde pred (next (next clauses)))))))

In the end the changes are pretty simple but it took me a bit of working out at the time. I’m just not used to macro programming yet. So some questions to the lazy web:

Firstly: can condp be used to achieve the same thing?

Secondly: the code checks whether the predicate is a sequence, which seems a little awkward. Is there a better way of doing this?

Self-running Clojure batch file

I picked this up from the mailing list. I didn’t want to forget about it so I’m posting it here as a reminder.

:x (comment
@echo off
java -cp d:/products/clojure/clojure.jar clojure.main "%~f0" %*
goto :eof
)

(println "Hi!" *command-line-args*)

Very handy.

Functional Thinking

I ran across an interesting blog post the other day from Jeff Gortatowsky. He has been trying to learn Clojure and was having a hard time thinking in a functional manner. It resonated with me because I’ve also found that aspect of Clojure to be the most difficult to learn. For the most part, picking up the syntax has been easy, but learning to program without actually mutating values is a lot harder.

As part of his learning process he presented an interesting little problem.

take a vector of intra-day stock prices for some particular stock (numbers) and determine the maximum profit you could have made buying that day and selling that day

I made the initial mistake of thinking that the problem involved finding all the possible good trades and summing them up. However, on reading more carefully, the problem only calls for making a single best trade.

After a bit of thinking it seemed to me the way to go about solving this is: for any given price, find the highest sell price available after it. By looking at each price in this way you can find the best trade.

I gave this a shot myself but ended up with a solution that produced the correct value yet had a bug, so I won’t bore you with that code. More interesting are some of the solutions other people submitted to this problem.

The first solution is from Lee. I’ve reformatted it to make the structure clearer:

(def prices [23, 24, 21, 23, 45, 33, 22, 1, 33, 4, 2, 23])

(defn prof [[p & r]]
  (if (boolean r)
    (cons (- (apply max r) p)
          [(prof r)])
    -1))

If we run this we get the following results

(prof prices)

=> (22 (21 (24 (22 (-12 (0 (11 (32 (-10 (19 (21 -1)))))))))))

To get the actual answer we need to find the max:

(apply max (flatten (prof prices)))

which gives 32.

This solution isn’t really my favourite but it’s worth going through how it works.

The key to this solution is that it is recursive. It loops through each value in the list and finds the difference between that value and the maximum of the remainder of the list. Since these data structures are immutable, we end up with a nested list at the end of the computation, which then has to be flattened before the maximum is calculated.

Mac gives a solution I really like:

(reduce max (map - prices (reductions min prices)))

The function ‘reductions’ is from contrib’s seq-utils (it will be in core in 1.2, I believe) and I’d never actually run into it before. It turns out to be really useful. Let’s take a look at the docs:

Returns a lazy seq of the intermediate values of the reduction (as per reduce) of coll by f, starting with init.

That sounds pretty cool, but what does it mean? Let’s take a look at the results it produces:

(reductions min prices)

;; input: [23, 24, 21, 23, 45, 33, 22, 1, 33, 4, 2, 23]
;; output: (23 23 21 21 21 21 21 1 1 1 1 1)

You can see that it descends from the first number down to the smallest number as it encounters it.

That is pretty cool and all, but how does it help us solve the problem? To answer that let’s take a look at the next bit.

(map - prices (reductions min prices))

;; output: (0 1 0 2 24 12 1 0 32 3 1 22)

So what is going on here? The min reduction is being subtracted, element by element, from the prices. Think that through for a while: the minimum reduction represents the smallest price we’ve seen up to each point in the list, so at each position we’re calculating the profit from buying at the lowest price seen so far and selling at the current price.

It is a very clever solution and probably not something I would have thought of even if I had known about the reductions function.

Frode has another interesting solution (reformatted for clarity)

(defn calc-profit [prices]
  (first (reduce
          (fn [[delta low] price]
            [(max delta (- price low))
             (min price low)])
          [0 Integer/MAX_VALUE]
          prices)))

This wasn’t easy for me to follow at first glance. We have a reduce that takes an anonymous function, an initial value and the list of prices. The key to understanding it is the anonymous function. The first thing you can see is that it destructures its first parameter into ‘delta’ and ‘low’. That makes sense since the initial value is a vector rather than a number, and the function returns a vector as well.

This is a bit different from what I would have expected. In a more traditional reduce such as (reduce + [1 2 3]) the accumulator is the same kind of thing as the items in the collection – plain old numbers. By seeding the reduce with a vector, however, you can use it to carry extra information along as you perform the calculation. In this case the first slot holds the maximum differential seen so far between a price and the lowest price before it, and the second slot holds the lowest price seen so far. Once we are done we just take the first value and that is our best profit.
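To make the accumulating pair concrete, here is a sketch that swaps reduce for reductions so we can watch the intermediate [delta low] values go by (reusing the prices vector from earlier):

```clojure
(def prices [23 24 21 23 45 33 22 1 33 4 2 23])

;; Each intermediate value is a [best-delta lowest-price-so-far] pair.
(take 6 (reductions
         (fn [[delta low] price]
           [(max delta (- price low))
            (min price low)])
         [0 Integer/MAX_VALUE]
         prices))
;; => ([0 2147483647] [0 23] [1 23] [1 21] [2 21] [24 21])
```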

In many ways this is actually very much like the reductions solution. Very elegant, although maybe not as clean looking on the page.

So there you go. Proof that you can learn a lot from studying other people’s code.

Train wrecks in functional languages

Recently @Steve_Hayes twittered the following:

Why are functional trainwrecks good and oo ones bad? What’s the conceptual difference?

Since Twitter is a little inadequate for answering this question directly, I’ll attempt to outline my view of the difference on this blog. I’ll first put forward the disclaimer that I’m certainly not an expert in functional languages, but I’ve never let my ignorance get in the way of having an opinion before, so why stop now?

Firstly, let’s start by defining a train wreck in an OO context. Typically it would look something like this:

String postcode = order.getCustomer().getAddress().getPostcode();

The code above traverses fairly deeply into the object hierarchy, effectively creating a dependency on code far away from the caller.

If we were to split that code up as follows:

Customer customer = order.getCustomer();
Address address = customer.getAddress();
String postcode = address.getPostcode();

We still end up with code that does the same thing, but we might not describe it as a train wreck any longer. Whether the first or second form is better is probably a matter of style; they both suffer from the same problem, which is violating the Law of Demeter.

Wikipedia has this to say about the law of demeter:

More formally, the Law of Demeter for functions requires that a method M of an object O may only invoke the methods of the following kinds of objects:

1. O itself
2. M’s parameters
3. any objects created/instantiated within M
4. O’s direct component objects

In particular, an object should avoid invoking methods of a member object returned by another method.

So the real problem with a train wreck is that it can be the sign of a violation of the law of demeter.

While I like the principle of the LoD in theory, in practice it can be hard to follow, so I have to question whether the odd train wreck is necessarily bad.

But so far I haven’t really looked at train wrecks from the point of view of functional languages. One question you need to ask is: is it even possible to have a train wreck in a functional language? You can certainly chain function calls together in ways that look a little like an OO train wreck, but you’re not traversing an object graph, so is it really the same?

In fact in functional languages things work a little differently with respect to functions and data. One famous quote from Alan Perlis is:

It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.

It may be a generalization, but good functional programs tend towards the 100-functions-on-one-data-structure end, while OO programs tend to fall into the 10-functions-on-10-data-structures end.

In an OO language you generally want to design your objects to have as few dependencies as possible. Each class becomes an encapsulation boundary. A train wreck is a clear violation of that concept. In a functional language there are fewer core abstractions and the developer operates on them in a variety of different ways.

In fact this applies not just to having fewer data structures but also to having fewer abstractions. Take Clojure, for example: it uses the sequence abstraction everywhere. This allows us to string functions together in really interesting ways. Take this small example:

(map inc (take 5 (iterate dec 5)))
;; => (6 5 4 3 2)

Here we’re stringing together lots of little function calls to achieve an interesting calculation. It looks like a train wreck but isn’t really the same thing at all, since it’s just working with sequences and values at each point.

So how about a more apples-to-apples comparison?

(def person {
  :name "Mark Volkmann"
  :address {
    :street "644 Glen Summit"
    :city "St. Charles"
    :state "Missouri"
    :zip 63304}
  :employer {
    :name "Object Computing, Inc."
    :address {
      :street "12140 Woodcrest Executive Drive, Suite 250"
      :city "Creve Coeur"
      :state "Missouri"
      :zip 63141}}})

Here we’ve defined the details for a person. Immediately we can see it differs from the OO version. Firstly, rather than objects we’re using plain old maps. Rather than defining several classes to hold the address and employer details, we simply nest the data. The data can be accessed in a few different ways:

(-> person :employer :address :city)


(((person :employer) :address) :city)


(let [employer (person :employer)
      address (employer :address)
      city (address :city)]
  city)

;; each returns "Creve Coeur"

All “train wrecks” in their own manner. The difference, I guess, is that we no longer have the same boundaries we had with our OO code. By using maps to store our data we get access to the many functions that operate on maps. Maps can also be treated as sequences, so we get access to all the functions that operate on sequences as well. Even our keywords operate as functions. Come to think of it, the map itself is a function too. Can you see the whole – many functions, few data structures – thing in play now?
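A few REPL lines illustrate the point, reusing an abridged version of the person map from above:

```clojure
;; person abridged from the map above
(def person {:name "Mark Volkmann"
             :employer {:name "Object Computing, Inc."}})

(:name person)     ;; keyword as a function => "Mark Volkmann"
(person :name)     ;; the map itself as a function => "Mark Volkmann"
(count person)     ;; ordinary collection functions apply => 2
(map first person) ;; sequence functions see a map as key/value pairs
```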

The LoD thing doesn’t really make sense in this view of the world, while in an OO world it certainly does.

Fun with Clojure

I really like playing around with new languages, but usually I don’t stick with them for long. My day job is coding Java/Javascript and there are few opportunities at work to play around with other languages, so most of my experimentation ends up being after hours. Once I’ve learned a new language, without a practical use to put it to I generally tire of it.

More recently I’ve started learning Clojure, and it has kept my interest more than anything I’ve learned in recent years. I won’t bother giving an introduction to the language – there are already some excellent resources for that. What I will do is just rattle off some of the things I really like and dislike about it.

Firstly we have the language itself. It’s a Lisp, but it’s not the same as Scheme or Common Lisp. Clojure is built on top of the JVM and this makes it a particularly practical language despite its young age. The full Java ecosystem is available to use. It very much brings its own flavour to the table while keeping many of the things that make Lisps great – macros, simple syntax etc. It takes the functional aspects of Lisp and turns them up by making immutability the default.

I’ve heard Clojure described as having controlled mutability, which seems a fairly accurate description. The mechanisms it provides for mutation go beyond standard lock-and-wait. One of the more novel concurrency features is support for software transactional memory (STM), which basically brings database-like transactions to everyday variable access. While not new, STM isn’t widely used, so it will be interesting to see how it pans out in a language designed from the ground up to use it. There are plenty of other concurrency mechanisms, and you can even go back to plain old locks if it suits your particular problem, but what I like about STM and the other concurrency options is that they make reasoning about concurrency much easier. Be sure to read Rich Hickey’s essay on values, state and identity for a great rationale on why Clojure handles mutability the way it does.

The biggest problem I’ve had adapting to Clojure has been dealing with a world where I can’t mutate values. It changes the entire way you go about solving problems, and many times I’ve felt like I was relearning how to program. Usually coding this way takes me a little longer, no doubt because I’m just not used to coding in a purely functional manner, but the resulting code has generally turned out simpler in the end. The reason, I believe, is that without mutability time is no longer part of your solution, so your code becomes much closer to a pure mathematical expression. This, combined with Clojure’s terseness and composability, makes for some expressive programs.

But enough gushing – there are definitely warts too. While being a language designed for the JVM has definite positive points, it certainly has its downsides. Clojure’s types tend to mirror Java’s types. For the most part this isn’t an issue, but to represent numbers Clojure uses the boxed object types – doubles, for instance, are represented by the Double class. While you can construct primitive doubles, these will always be automatically boxed into Double objects when passed to other functions. This means a lot of stuffing around when trying to get good numeric performance. Later versions of Java have escape analysis, but this only gets you so far (and isn’t on by default AFAIK).

My other complaint with Clojure is the stack traces. Even for trivial programs they’re long, verbose, nested multiple times and aren’t demangled by default. They also tend not to include much in the way of context.

Let’s take a look at an extremely simple example. Real-world examples tend to get a lot more verbose.

Say we have the code “(map)”. Since map expects at least two arguments we expect this to fail. Here are the results:

Exception in thread "main" java.lang.IllegalArgumentException: Wrong number of args passed to: core$map (test.clj:0)
        at clojure.lang.Compiler.eval(
        at clojure.lang.Compiler.load(
        at clojure.lang.Compiler.loadFile(
        at clojure.main$load_script__7405.invoke(main.clj:213)
        at clojure.main$script_opt__7442.invoke(main.clj:265)
        at clojure.main$main__7466.doInvoke(main.clj:346)
        at clojure.lang.RestFn.invoke(
        at clojure.lang.Var.invoke(
        at clojure.lang.AFn.applyToHelper(
        at clojure.lang.Var.applyTo(
        at clojure.main.main(
Caused by: java.lang.IllegalArgumentException: Wrong number of args passed to: core$map
        at clojure.lang.AFn.throwArity(
        at clojure.lang.RestFn.invoke(
        at user$eval__1.invoke(test.clj:1)
        at clojure.lang.Compiler.eval(
        ... 10 more

The stack trace isn’t too bad length-wise; in real examples they tend to be much larger. Given my test code is only one line long, I don’t really want to know about the Clojure compiler internals. Here are some other problems as I see them.

  • It includes a nested exception which uglies up the stack trace. We really only care about the root cause.
  • The line number referenced at the top (test.clj:0) is different from the line number in the root stack trace (test.clj:1).
  • The error doesn’t include the actual number of arguments passed or the expected number of arguments. For extra points it could have included the documentation string of the method we were trying to call since that information is included in the metadata.
  • The stack trace contains a lot of internal clojure calls which obscure where the real error is. In an 18 line trace like this one it’s not hard to search through but frequently real world stack traces can span over 100 lines.
  • Clojure function names are mangled. They need to be translated to work out where the real error is.

JRuby stack traces aren’t reported like this, so I see no reason why Clojure stack traces need to look like Java ones. In general, while the output produced is adequate, it’s not really optimal IMHO.

On the other hand, one really helpful thing is having a REPL available. Lisps work really well with a REPL. The immediate feedback you get from one is a huge help during development and testing; it’s easy to create little experiments during coding and then paste them back into your main code if they work out.

In fact, with Emacs and SLIME I can easily modify my code and send it straight to the REPL with a single key sequence, instantly changing function definitions in running programs. Powerful, useful and wonderful. In fairness to Java you can kind of get that with HotSwap, but it is slower and more limited.

There are so many other things to like about Clojure. Destructuring is a great way to pick out the bits you need from a data structure. The persistent data structures mean sharing structure without worrying about aliasing problems.
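As a small, made-up example of destructuring (the map shape and the describe function are invented for illustration):

```clojure
;; Destructuring picks the bits we need straight out of a data structure.
(defn describe [{:keys [name address]}]
  (str name " lives in " (:city address)))

(describe {:name "Ann"
           :address {:city "Melbourne" :zip 3000}})
;; => "Ann lives in Melbourne"
```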

Another feature I love is the way Clojure makes a lazy sequence out of almost everything. Sequences are a powerful abstraction that form a core part of Clojure. Finally, a language that makes composition practical. In many languages you get frameworks instead of libraries, yet with Clojure I’ve noticed very few frameworks. I believe the reason is that composition in Clojure is much easier than in most languages. Most frameworks seem to be built to get around limitations in the host language, and they suffer from the problem of forcing you to code your application around the framework rather than simply making use of functionality in an API. The combination of features such as macros, sequences, first class functions and multimethods provides the ability to define much more flexible APIs than most languages.

Since I mentioned multimethods, let me go into that for a bit. For many years I’ve been a big believer in object oriented programming. Part of the draw of OO is the ability to polymorphically dispatch calls based on the type of an object, something I’ve always considered a core feature of OOP. Clojure is a functional language, but it turns out you don’t really need objects to support polymorphism. This may be obvious to those exposed to multimethods in other languages, but it was something I’d never considered before. It turns out that Clojure’s support for polymorphism is more flexible than a typical OO language’s, because you control the definition of the dispatch function yourself. This means it is possible to apply polymorphism in ways that would be completely alien in an OO language.
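As a sketch of what a user-controlled dispatch function looks like (the shapes and keys here are invented for the example, not from any library):

```clojure
;; A multimethod whose dispatch function is ordinary code, not a type
;; lookup: here we dispatch on the :shape key of a plain map.
(defmulti area :shape)

(defmethod area :circle [{:keys [radius]}]
  (* Math/PI radius radius))

(defmethod area :rect [{:keys [w h]}]
  (* w h))

(area {:shape :rect :w 3 :h 4})
;; => 12
```

Because the dispatch function is just a function, you could equally dispatch on a combination of values, a computed property, or anything else.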

Anyway, I’ve probably been rambling long enough. I could easily list many more things I like, but for the moment I’m done. Thanks for reading.

Nested Ant Properties

Ant doesn’t support nested properties (directly). After some searching I came across this solution. I’m putting it here mainly as a reminder to myself if I ever need to do this again.

Just to define the problem… say I want to set the version number to be dependent on the project. Naturally you might think to try something like this…

<property name="project.version" value="${${major.project}.project.version}"/>

…but it doesn’t work. The solution that does work is the following:

<macrodef name="propertycopy">
  <attribute name="name"/>
  <attribute name="from"/>
  <sequential>
    <property name="@{name}" value="${@{from}}"/>
  </sequential>
</macrodef>

<property name="project.version.prop" value="${major.project}.project.version"/>
<propertycopy name="project.version" from="${project.version.prop}"/>

Ugly, but workable.