Going off IDE

Considering the change

A few days ago I was interviewing someone for a Scala position. The guy is some kind of Haskell guru (he hasn’t decided whether he will join our team yet; I hope he does). For the interview, I decided that we would work on a simple project, mostly to see his work/thinking flow in action and whether we could work together. At some point he told me, “I don’t believe in IDEs,” while setting up Sublime Text for Haskell. Well, that’s a really silly thing to say, along the lines of “I don’t believe in cellphones.” But this is a really smart guy…

From that moment on, every time I used IntelliJ, I made mental notes on what I was using it for. I have used Sublime Text for other things, like JavaScript coding and other obnoxious tasks, so I have a good idea of which capabilities it has and which ones it lacks.

I liked an IDE for:

  1. Step-by-step debugging
  2. Error highlighting
  3. Running individual unit tests
  4. Code completion
  5. Knowing the types of a long combination of functions

Step-by-step debugging is useful, but it can be replaced with carefully placed logger.debug statements throughout the code. It is annoying, but I can endure it. Nevertheless, why should I?

I can detect errors as I write and save by having SBT watch for changes with ~compile. But again, why should I?

I can run individual tests from SBT: I can copy the name of the test (I use Specs2) and run it with testOnly, but then… yeah, why!?
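For reference, the sbt shell side of that workflow looks something like this (com.example.MySpec is a made-up suite name):

```text
> ~compile                         (recompile on every save)
> testOnly com.example.MySpec      (run a single Specs2 suite)
> ~testOnly com.example.MySpec     (re-run that suite on every save)
```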

The last two items on the list were the ones that made me change my mind. I will try not to use IntelliJ. I’m going with Sublime Text 3 for a while, hopefully forever, or at least until a better text editor shows up.

I’ll keep IntelliJ for working on legacy code, since it is better for code you don’t understand. Which brings me to why the last two bullets are the reason for the change.

When using an IDE, your code is “alive”

When working with an IDE, your code “speaks back.” If you make a mistake, it will tell you. If you write something badly written but correct, it will accept it, and will give you enough information about that poorly written code to keep going.

Humans are lazy; at least I am. We will accept any help we can get, as long as we trust its source, and then we will build new stuff on top of that help. Keeping your code clean and understandable is an important design choice, and it is never easy for any non-trivial task. The IDE helps you navigate your code, so you end up writing code that is hard to navigate.

I didn’t just jump off the IDE wagon; I did some back and forth between IntelliJ and Sublime Text, so I could feel the differences. The first thing I noticed was that, in Sublime Text, it felt like I was running with my hands tied behind my back. I understood the code, but at the same time, I wasn’t able to modify it, to do real work with it, at least not in a comfortable way.

I must say, this is not production code we are talking about. It is a work in progress; I’m still trying to find the patterns for the API.

So I realized I was counting on the IDE to help me find the patterns in the types. That’s not wrong per se, but, in a bizarre way, it was making everything too difficult. Instead of thinking about what I wanted, I was just throwing combinators at it and “hoping for the best.”

So my first step, while still on IntelliJ, was to disable error highlighting. It was annoying for a while, but my code quality improved really quickly. The solution that had been elusive for so long became apparent almost instantly.

The code shouldn’t be “alive”, because it is not smart enough

Obviously the solution didn’t just show up because I stopped using error highlighting. Rather, it was because I had to think more and better, and rewrite some portions of the code so the types were less strange, more intuitive, and, as a result, more correct.

That was what made me decide. I’m going full time on Sublime Text 3 from now on (it has “Go to definition,” which I think is really useful). For a long time I hadn’t really needed code completion; it was a nice help, but unnecessary. What I thought I needed was the IDE’s explanation of types. Now my code is no longer alive. It is a set of algebraic laws, or so I hope, and nothing else. I do the work; I figure it out.

Maybe some day our IDEs will be smart enough to help us write smart code. I don’t think we are there yet.

You can definitely write proper code with an IDE. What I’m saying is that the help can backfire and do more harm than good. I say that we, humans, aren’t strong enough not to fall for the pernicious “help.”

So, back to the “I don’t believe in IDEs” comment: even though it might make some sense, I still think it is a silly sentence; but then, I think anything that has the word “believe” in it is silly.


Graham’s scan: from imperative to functional

This story actually happened over a few hours, but the ride was the most fun I have had in the last few months (professionally), so I thought I’d share it.

The task

I had to implement Graham’s scan in Scala. Given that the algorithm has been around since the ’70s, you’d think it would be a walk in the park. It was more like a game of rapid chess.

GRAHAM-SCAN(Q)

1.  Find p0, the point in Q with the minimum y-coordinate (and the minimum x-coordinate if there are ties).
2.  Sort the remaining points of Q (that is, Q − {p0}) by polar angle in counterclockwise order with respect to p0.
3.  TOP[S] = 0              ▷ Lines 3–6 initialize the stack to contain, from bottom to top, the first three points.
4.  PUSH(p0, S)
5.  PUSH(p1, S)
6.  PUSH(p2, S)
7.  for i = 3 to m          ▷ Perform the test for each point p3, …, pm.
8.      while the angle formed by NEXT-TO-TOP[S], TOP[S], and pi makes a nonleft turn   ▷ remove if not a vertex
9.          POP(S)
10.     PUSH(pi, S)
11. return S

As you can see, the “good stuff” happens in a stack. All the versions of the algorithm that I could find were described this way. So instead of starting from the abstraction of the algorithm, I had to abstract an opinionated interpretation to come up with a functional version.

First things first: I created a data set of points to plot in Octave. Octave already has a convex hull function, so I could see the solution beforehand.


X = [0.410036859415169 0.16990953347486892 0.7438936112344358 0.9879763068249436 0.09850362854534644 0.27745454018107196 0.2463477117635604 0.0369350463381225 0.7356804213480419 0.979020964716884 0.4104133424281955 0.9566561367393451 0.36405116056182674 0.6743515475246237 0.2014896354108069 0.5937635359142525 0.9990876238009762 0.4753053860797861 0.15127539785912514 0.3109492347362207 0.3710641858259751 0.4981026670484303 0.472172885898987 0.9804204746053471 0.20208956184140103 0.7386385686849415 0.3182368631449125 0.4529941453549946 0.3035123394853474 0.3785220082038615 0.6028255171683854 0.4754941518809386 0.02885621173133124 0.619495881160238 0.03310900644639769 0.2791084654454382 0.8674411544744653 0.393894922057599 0.271592427317065 0.31136184917695364];
Y = [0.0964872038572564 0.8409000674315462 0.5513623119954284 0.9591944778976642 0.3152016460947692 0.16362690913082834 0.31570455065654346 0.5538361485253371 0.7670638114116081 0.015668444677785942 0.3467907029082512 0.07399794456597841 0.12236562106445759 0.9981954727371612 0.9308695268083278 0.8560770018856956 0.6579161973082933 0.5959520081450651 0.13661844095029363 0.6514314696732396 0.19298645312277207 0.00994355691826554 0.7010697212377054 0.1634450128412981 0.3793185478977382 0.868604478585853 0.4590548622219143 0.18377356484707785 0.2866066023269005 0.8213505729786262 0.38257319264118383 0.4566803134679752 0.4748548161642837 0.26842322605193014 0.5464910173382869 0.6503654422352256 0.9950193150372821 0.730931140118583 0.5869777402729242 0.06419667711655552];

k = convhull(X,Y);

plot (X(k), Y(k), "r-", X, Y, "b+");

 

[Plot of the sample points with their convex hull, drawn by Octave]

There, a reliable sample from Octave.

Starting

First, I found a cool way of sorting the points by polar angle:

import scala.math.atan2

// origin is p0; sort the rest of the (x, y) tuples by polar angle around it
val coordList = origin :: coords.filterNot(_ == origin)
  .sortBy(point => atan2(point._2 - origin._2, point._1 - origin._1))

Just to be sure, I plotted the sorted points without any further processing:

[Plot of the points connected in polar-angle order]

Beautiful polar angles!

In another article, I found this amazing formula for deciding whether a point makes a left turn relative to the line created by the other two:

// isLeft(): tests if a point is Left|On|Right of an infinite line.
//    Input:  three points P0, P1, and P2
//    Return: > 0 for P2 left of the line through P0 and P1
//            = 0 for P2 on the line
//            < 0 for P2 right of the line
//    See: Algorithm 1 on Area of Triangles
inline float
isLeft( Point P0, Point P1, Point P2 )
{
    return (P1.x - P0.x)*(P2.y - P0.y) - (P2.x - P0.x)*(P1.y - P0.y);
}

So, P2 is the new point, P1 the last point added to the hull, and P0 the second to last.

Misinterpretation

In my first version I was doing the following:

  1. Add the first three points to the hull.
  2. For every new point, check whether it makes a left turn relative to the two previous points.
  3. If it does, add it to the hull.
  4. If it doesn’t, remove the last point from the hull and add the new one.

Sounds reasonable, right?

I wrote the code and plotted the result in Octave.

[Plot of a hull that includes extra, wrong vertices]

Oh noes! it doesn’t work!

Yes! No! It found all of the points of the hull, but it also kept the wrong ones. It is hard when perfectly reasonable code doesn’t do what you think it should. Anyway, I knew the algorithm was fine, and I knew the points were sorted properly. So it had to be my interpretation of the algorithm. After a careful reading of the pseudocode above, I realized that before adding a new point, I needed to scan the already found vertices against the new point, removing P1 every time a non-left angle was found.

foldRight

The coolest moment for me was realizing that, instead of the stack, I could just use foldRight. That gave away the solution.

import scala.math.atan2

case class Point(x: Double, y: Double) {
  // improves readability
  def goesLeft(p0: Point, p1: Point): Boolean =
    (p1.x - p0.x) * (this.y - p0.y) - (this.x - p0.x) * (p1.y - p0.y) > 0
}

def scan(coords: List[Point]): List[Point] = {
  // instead of using PUSH and POP, foldRight
  def addToHull(hull: List[Point], new_point: Point): List[Point] =
    // the new point is always added, but filtering all the
    // non-left-turn vertices out of the current hull
    new_point :: hull.foldRight(List.empty[Point]) {
      case (p1, currentHull @ (p0 :: _)) =>
        if (new_point.goesLeft(p0, p1)) p1 :: currentHull
        else currentHull
      case (p, currentHull) => p :: currentHull
    }

  val min = coords.minBy(_.y)
  // adding the min to close the polygon on octave
  min :: coords.sortBy(point => atan2(point.y - min.y, point.x - min.x))
    .foldLeft(List.empty[Point])(addToHull)
}

[Plot of the correct convex hull]

It worked!
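As a quick sanity check, here is the code above restated as a self-contained script and run on a tiny sample: the corners of a square plus one interior point. The expected hull is just the four corners, with the minimum point repeated to close the polygon.

```scala
import scala.math.atan2

case class Point(x: Double, y: Double) {
  // > 0 when `this` lies to the left of the line p0 -> p1
  def goesLeft(p0: Point, p1: Point): Boolean =
    (p1.x - p0.x) * (this.y - p0.y) - (this.x - p0.x) * (p1.y - p0.y) > 0
}

def scan(coords: List[Point]): List[Point] = {
  def addToHull(hull: List[Point], newPoint: Point): List[Point] =
    newPoint :: hull.foldRight(List.empty[Point]) {
      case (p1, currentHull @ (p0 :: _)) =>
        if (newPoint.goesLeft(p0, p1)) p1 :: currentHull else currentHull
      case (p, currentHull) => p :: currentHull
    }
  val min = coords.minBy(_.y)
  min :: coords
    .sortBy(p => atan2(p.y - min.y, p.x - min.x))
    .foldLeft(List.empty[Point])(addToHull)
}

val sample = List(
  Point(0, 0), Point(2, 0), Point(2, 2), Point(0, 2), // the corners
  Point(1, 0.5)                                       // an interior point
)

// the interior point is filtered out; min is repeated to close the polygon
assert(scan(sample) ==
  List(Point(0, 0), Point(0, 2), Point(2, 2), Point(2, 0), Point(0, 0)))
```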

I created a project on GitHub that also has a Java 8 solution, not very functional. What the test suites actually do is generate random samples and an Octave file with the solution for plotting.

Injecting functions

Let’s say we have two ways of getting the same information. We’ll use one or the other depending on the context of our code, and there is no guarantee that a third way won’t appear.

So, this is our scenario (it could just as well have been written in Java):

//Returns "12" when the parameter v is "a",
//"3" if v is "b"
//and "-1" otherwise.
def fetch_info(v: String) = v match {
 case "a" => "12"
 case "b" => "3"
 case _ => "-1"
}
//concatenates the string parameters
def action1(param1: String, param2: String, common: String) =
 param1 + param2 + common
//with the first parameter, retrieves a string from fetch_info
//and concatenates it with the common parameter
def action2(param1: String, common: String) =
 fetch_info(param1) + common

val param3 = fetch_info("b")
//both return "123"
action1("1", "2", param3) == action2("a", param3)

As I said, we have two ways of getting the same information. In one case, the processing is straightforward: we just concatenate the Strings. The second case could happen when these Strings are not at hand, so we need to call fetch_info to gather the missing information.

We can see that the common operation is the concatenation of the common parameter with either the result of fetch_info or the concatenation of param1 and param2. So we can create two functions, both using common_action to create the final result.


//common operation
def common_action(param1: String, common: String) =
 param1 + common
//concatenates the string parameters
def action1_common(param1: String, param2: String, common: String) =
 common_action(param1 + param2, common)
//with the first parameter, retrieves a string from fetch_info
//and concatenates it with the common parameter
def action2_common(param1: String, common: String) =
 common_action(fetch_info(param1), common)
val param3 = fetch_info("b")
//both return "123"
action1_common("1", "2", param3) == action2_common("a", param3)

We could leave it there; but what if we know that a third kind of action could come up? We could implement the Bridge Pattern and/or the Adapter Pattern (hence the title of the post: the Bridge Pattern uses Dependency Injection); but with Scala there is another option, a functional one.


//takes two parameters and returns a function that takes a String
//and returns a String
def action1_2(param1: String, param2: String): (String) => String =
 (c) => param1 + param2 + c
//takes one parameter and returns a function that takes a String
//and returns a String
def action2_2(param1: String): (String) => String =
 (c) => fetch_info(param1) + c

val param3 = fetch_info("b")
//both return "123"
action1_2("1", "2")(param3) == action2_2("a")(param3)

I know it looks a little useless, but it is good for understanding what’s happening.

Both action1_2 and action2_2 return a function that takes a String and returns a String. That means action1_2("1", "2") and action2_2("a") actually return functions of the form:

(String) => String

That’s why we use these closures: they create the functions we need.

(c) => param1 + param2 + c
(c) => fetch_info(param1) + c

We kind of “normalized” the operation. Now we can pass the resulting function around, gaining abstraction. That means we can do something like this:

//this operation takes a param for retrieving the "3" and
// then applies the "action" to get the final result
def big_operation(param: String, action: (String) => String): String =
 action(fetch_info(param))
//both return "123"
big_operation("b", action1_2("1", "2")) == big_operation("b", action2_2("a"))

big_operation is isolated from the function that obtains the information it needs. That means that if a third way of getting “123” appears, big_operation won’t have to change (we are assuming that big_operation is big, with a lot of logic).

But how do we reason about this? It’s not like we want to keep refactoring code for the rest of our lives; well, we will, but the refactoring could be reduced if we came up with something like big_operation from the beginning.

In this case, the best way of finding a pattern is to find the common operation:

def common_action(param1: String, common: String) =
 param1 + common

Not the actual function, of course, but the pattern of having one common element that has to be combined with some other calculable element. Meaning:


(common) => param1 + common

Assuming we have param1 in scope, which we do in action1_2("1", "2") and action2_2("a"). That usually means we can “bind” the common parameter as a dependency of partial operations. That sounds weird, but it means we need to return a function that takes common as a parameter. I said “partial” because we are actually returning a function that already has a few parameters applied, but still needs another (or others) to produce a proper result.
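This “partial application” idea maps directly onto Scala’s multiple parameter lists. A sketch (the _c names are made up here; fetch_info is as defined earlier):

```scala
def fetch_info(v: String): String = v match {
  case "a" => "12"
  case "b" => "3"
  case _   => "-1"
}

// the `common` parameter sits in its own, second parameter list
def action1_c(param1: String, param2: String)(common: String): String =
  param1 + param2 + common

def action2_c(param1: String)(common: String): String =
  fetch_info(param1) + common

// applying only the first list gives back a String => String,
// just like the hand-written closures
val f1: String => String = action1_c("1", "2")
val f2: String => String = action2_c("a")

val param3 = fetch_info("b")
assert(f1(param3) == "123")
assert(f2(param3) == "123")
```

Same behavior as the closures, but the compiler builds the partially applied function for us.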

That’s why we returned the closures, and that’s a cool way of doing Dependency Injection(-ish) with Functional Programming.

It is not a bad idea to take a look at The Neophyte’s Guide to Scala, Part 11: Currying and Partially Applied Functions (and the whole series, for that matter).

 

Grasping immutability

I work in a Scala shop. We are a small start-up. When I arrived, the CTO was the only developer on the back-end, and we were inheriting a large, ill-written Ruby codebase. Fortunately, the CTO told me that our first “task” was to rewrite everything in Scala. So we did, and now we are a Scala shop.

Our lives are filled with happiness; but that’s not true for the new members of the team. The company is in South Florida, a place where only old people and spring-break teens want to move. So we pick developers who are smart and eager to learn, even if they don’t have any experience with Scala or functional programming in general, which they never do.

The new members learn as they go, since we can’t wait for them to be comfortable with Scala. This has given me the opportunity to witness all kinds of code: from the carefully crafted code our CTO produces, to the crude, and often pragmatic, code our less experienced developers write.

One concern of theirs is that they might not be writing “worthy” code. I had that problem too when I started. I have come to consider the craving for the “right” code one of the worst dangers when learning Scala: there are too many ways of doing it right, and the doubt can be paralyzing.

One of the developers asked me if I was OK with his code having vars. I told him I didn’t really care, but that he should be careful about passing those vars around; they should have a limited scope. Then I gave him a little talk about immutability and its benefits. This happened a few times. Of course, what he understood each time was that vars were OK, and he just kept going with his work. Eventually he hit a wall. His code had a bug, one of those hard-to-grasp ones. His code worked well when tested under controlled conditions, but always failed once the concurrent reality was added into the mix.

Then something happened that made all those little talks worth their while: I suggested he look for shared state somewhere. He found the bug, and I could see right there, as he was telling me about his findings, that he finally understood the benefits of immutability.
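A minimal sketch of the kind of trap he ran into (the names here are made up): a function that reads a shared var gives different answers for the same input, depending on who mutated the var last, while the immutable version depends only on its arguments.

```scala
// shared mutable state: the result of priceWithVar depends on
// whoever touched discountPct last, not just on its argument
var discountPct = 10
def priceWithVar(cents: Int): Int = cents * (100 - discountPct) / 100

val before = priceWithVar(1000) // 900
discountPct = 50                // some other code path mutates the var
val after = priceWithVar(1000)  // 500: same call, different answer

// immutable version: the dependency is an explicit parameter,
// so the same inputs always give the same output
def price(cents: Int, pct: Int): Int = cents * (100 - pct) / 100
assert(price(1000, 10) == 900)
assert(before != after) // the mutable version drifted under our feet
```

Under concurrency the drift becomes nondeterministic, which is exactly the “works in tests, fails in production” bug described above.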

 

 

Functions that return functions, how I reason about it

Since I started learning and working with Scala, and functional programming in general, there has been one thing I could never really get my head around.

Why would I want to make a function that returns a function?

Funny enough, I found the answer in a Clojure book[1]; I was reading a chapter about creating a minimalistic logging framework. I assumed that everything I was being told was old news, and then I had my enlightenment: it is awesome that functions can return functions.

Anyway, for those who haven’t gotten it yet, here is why you would want functions returning functions. The code is in Scala:

def optimist(a: Int, b: Int): Int = a + b
def pessimist(a: Int, b: Int): Int = a - b

def choose_operation(feeling_good: Boolean): (Int, Int) => Int =
 if (feeling_good) optimist else pessimist

val a = 2
val b = 1

val operation_good = choose_operation(feeling_good = true)
operation_good(a, b) == 3

val operation_bad = choose_operation(feeling_good = false)
operation_bad(a,b) == 1

So, a little detail about what just happened.

Our code has two possible operations: optimist and pessimist.

They both take two integers as parameters, do something with them and return a result.

The function choose_operation, based on the parameter feeling_good, returns a function that takes two integers, does something with them, and returns a result.

So we evaluate choose_operation, and we use whatever it returns to calculate the result from a and b.

In this manner we can “configure” how certain operations are performed based on some initial context. In other words, the fact that functions can return functions is another way to get rid of the need to pass around objects with state in them.
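To make that last point concrete, here is a tiny sketch (makeGreeter is a made-up name): instead of passing around a configuration object that carries the greeting, the returned function closes over that context.

```scala
// the returned function "remembers" greeting, so no config object
// has to travel alongside the name
def makeGreeter(greeting: String): String => String =
  name => s"$greeting, $name!"

val hello = makeGreeter("Hello")
val hola  = makeGreeter("Hola")

assert(hello("Ana") == "Hello, Ana!")
assert(hola("Ana") == "Hola, Ana!")
```

Each call to makeGreeter “configures” a new function from the initial context, just like choose_operation does with feeling_good.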