Saturday, January 14, 2012

Switching to IntelliJ IDEA

IntelliJ IDEA 11 came out recently, and after some frustrations with Eclipse I decided to give the Community Edition a try. To my surprise, I've decided to stay with it, and I thought it was a good opportunity to dust off the old blog to explain why. Maybe it will be useful to others who have been considering trying IDEA for a while but aren't sure if it will be worth their time.

Eclipse
First off, let me give Eclipse some credit. I've been more than satisfied with it over the past 6 or 7 years. I haven't had any major issues with it, but it's been a bit like death by a thousand cuts. Most of the pain comes from a few poor-quality plugins, or from good plugins not playing nicely with each other.

Eclipse's incremental compilation is incredible. It's probably the biggest feature differentiating Eclipse from IDEA, for better and for worse. The pro is immediate feedback: by the time you're ready to run your tests, Eclipse has done most of the heavy lifting and is ready to launch. The con is that the heavy lifting happens while you're editing or otherwise interacting with the UI, which can make the IDE feel less responsive at times. Having to integrate with the JDT also affects some plugins, which I'll get into later.

Enter IDEA
Later I'll list a bunch of small features that don't seem like much individually, but combined make a big difference. But first, the big differences.

Responsiveness
The biggest difference I've noticed with IDEA is its responsiveness to user interaction. It feels lightweight and snappy compared to Eclipse. Best of all, there's immediate code completion, with no keyboard combination to press and no waiting. This is huge. There have been several times I've cursed out Eclipse for making me wait a few seconds after hitting Ctrl+Space, especially in Scala code (more on that later).

Maven Integration
Surprisingly, IDEA has better Maven integration than Eclipse, despite Sonatype's great work on m2e. This is most likely an indirect result of Eclipse's incremental compilation. Because of this killer feature, Eclipse cannot delegate the build to Maven. That means m2e has to integrate the whole Maven build lifecycle into the JDT. This is called "project build lifecycle mapping" and the result is more configuration and plugin bloat. What's worse, if you're missing a lifecycle mapping it's an error by default: "Plugin execution not covered by lifecycle configuration". This was the straw that broke the camel's back for me.

Even the pom file editor is better in IDEA. For example, it will autocomplete Maven groupIds, artifactIds and versions for artifacts you don't yet depend on, which has saved me from going outside of the IDE (especially for versions). It also has built-in completion for Maven plugin configuration parameters in the pom file, which I've had to look up on the web in the past. "Navigate to Managing Dependency" lets you navigate from a dependency to its corresponding dependencyManagement section, even if it's in a parent pom. Plus, it has an "Introduce Property" pom file refactoring which is a time-saver.

Scala Integration
Scala integration in Eclipse is improving at a fast pace and may soon surpass that of IDEA, but for now I give IDEA the slight edge. This makes sense, because Scala is in the same boat as Maven in Eclipse: it has to integrate with the JDT. With Scala this is much more of a problem, though. The sheer amount of work that the Scala compiler has to do means incremental compilation of Scala code significantly affects the performance of Eclipse.

For the most part, Scala "just works" in IDEA, with most of the same features Java has plus the Scala-specific ones you'd expect. It even has a "Convert Java file to Scala" refactoring which is also known as the Ugly Scala Code Generator (TM), but it might save you some time if you're converting a project over. However, I have had some rare phantom errors in IDEA, meaning code marked red in the editor that compiles just fine. Also, since editor parsing is separate from compilation, these don't go away when you do a build or run tests like they might in Eclipse.

The Rest
This is the list of minor improvements which I've found useful. To keep it brief, I won't go into too much detail on each one, so I apologize if these are unclear. Many of these are examples of IntelliJ living up to its billing as an intelligent IDE because they require a deep understanding of your code.

  • Ability to search for actions (Cmd Shift A). I cannot overstate how valuable this has been in my transition. You can look up any action (refactoring, searching, etc.) and execute it while keeping your hands on the keyboard. It also displays keyboard shortcuts so eventually you won't have to look it up again.
  • You can limit the scope of any search (text search, class hierarchy, etc.) to only production code or only test code.
  • Highlights classes as being unused when they are only referenced by tests.
  • Colors tabs differently based on the nature of the class. For example, test classes are green.
  • Colors annotations differently to distinguish them in imports.
  • Rename refactoring on a field prompts to rename constructor and setter parameters, in addition to the getters and setters.
  • Console output from each test is self-contained, but you can get to the full console output if you need it.
  • Cmd click navigation from a string literal to a file with the same name in the project.
  • "Go To Test Subject" (Cmd Shift T) from a test class navigates to the class under test.
  • Optionally warns you when you're about to commit code with warnings/errors, and can create a separate changelist for you with only these changes if you choose not to commit.
  • "Shelve commits" lets you put aside your current changes so you can work on the current revision (or a different one) if you need to.
  • Autocomplete within string literals.
  • When you rename a class, it prompts you to rename similar variable names where the class is being used.
  • Extract method also finds duplications with different method arguments. I used to manually "pre-refactor" to get this same behavior in Eclipse by first extracting local variables for the intended arguments (see the sketch after this list).
  • Smart step-into when debugging so that you can step into a specific method on a line of code which calls many methods.
  • If you're editing a class file and add a reference to a class name that can't be resolved, there's a quick fix to "Add Maven Dependency".
  • Introduce parameter can move local variable initialization code from within a method to the method's callers.
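
To illustrate the extract method bullet above, here's a hypothetical before-and-after (the class and method names are mine, invented for illustration):

public class AreaExample {
  public static void main(String[] args) {
    double radiusA = 2.0;
    double radiusB = 3.0;
    // Before: both areas were computed inline as Math.PI * r * r.
    // Extracting the first expression into circleArea() makes IDEA flag
    // the second as a duplicate with a different argument and offer to
    // replace it too; no manual pre-refactoring required.
    double areaA = circleArea(radiusA);
    double areaB = circleArea(radiusB);
    System.out.println(areaA + " " + areaB);
  }

  static double circleArea(double radius) {
    return Math.PI * radius * radius;
  }
}
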
And here are some features that look useful, but that I haven't tried yet.

  • Analyze dependencies.
  • Analyze backward dependencies.
  • Refactor -> replace inheritance with delegation.

Conclusion
It's mostly a bunch of little things that I like better about IDEA. No one of them alone would be enough to make me switch, but combined they make me feel more productive, even after a few short weeks.

I know it's just a matter of time until I buy the Ultimate Edition, and I'm ok with that. You win this round, JetBrains. ;)

Thursday, March 26, 2009

ARM Blocks in Scala: Revisited

Over a year ago, when I was new to Scala, I tried implementing Automatic Resource Management. I was not pleased with the result, so I ended up recommending ManagedResource from Scalax. After coming across a couple of links to my original post, I decided to point out a different approach I've seen, one that seems to work well and appeals to me personally.

First, I'll try to figure out where I went wrong originally; those who are only interested in the new approach can skip the next section.

Unnecessary Complication
One of my original goals was to allow multiple resources per block. This ended up making the implementation much more complicated, especially with regard to exception handling. Also, in my first approach I included an exception handling mechanism which turns out to be unnecessary if we're only dealing with one resource at a time. The biggest problem with my first attempt, though, was that the resource had to be initialized outside of the ARM block, which, when multiple resources are initialized, can lead to the very resource leaks we are trying to avoid. Still, it was a good effort, if I do say so myself, and it helped me learn the language a bit.

A Simpler Approach
Lucky for me, Scala's creator, Martin Odersky, also thought ARM was a good example of the flexibility of the language. I don't know when he started using ARM in his talks, but the first time I saw it was at JavaOne 2008, and then again at the Scala Lift Off event the following weekend. I will not re-post Odersky's implementation here without his permission (not that I think he would object), but I recommend that you take a look at slide 21 of his FOSDEM 2009 presentation (which was hopefully posted with his permission).

Odersky's approach is much more elegant. It only allows a single resource to be managed per block, which is actually all you need since ARM blocks can be nested to support multiple resources. Exceptions are not caught so that the caller can handle them properly. Simplicity. This ends up being very similar to the approach that C# has taken, so he chose to name the method the same as the C# "using" keyword.

Here's how you use it:


import java.io._

// assumes Odersky's "using" method is in scope

// print a file, line by line
using(new BufferedReader(new FileReader("test.txt"))) { reader =>
  var line = reader.readLine()
  while (line != null) {
    println(line)
    line = reader.readLine()
  }
}

// copy a file, line by line
using(new BufferedReader(new FileReader("test.txt"))) { reader =>
  using(new BufferedWriter(new FileWriter("test_copy.txt"))) { writer =>
    var line = reader.readLine()
    while (line != null) {
      writer.write(line)
      writer.newLine()
      line = reader.readLine()
    }
  }
}


I actually like this approach better than the ManagedResource from Scalax because the syntax feels more natural; it feels like you are using a language feature. I always thought ManagedResource was a bit strange because it felt like an abuse of the for-comprehension, but that's just my opinion.

Since I hadn't seen anyone point out Martin Odersky's "using" implementation, I thought I would post about it. I think ARM is great and the fact that it can be implemented as a library in Scala is phenomenal.

If we're lucky, Java SE 7 will have ARM thanks to Project Coin, but it's getting more and more obvious these days that Java is not progressing as a language. Now if only I could do my everyday coding in Scala....

Wednesday, January 7, 2009

Introducing Pojomatic

New year, new project
I'd like to start off the new year by introducing a project I've been working on for a while now along with Ian Robertson, a colleague of mine at Overstock. The project itself is actually not new at all, considering we began working on it around April of last year and it's based on something similar we've been using internally at Overstock since the early days of Java 1.5. The project is called Pojomatic, and what's new is that it's open source (Apache 2.0 license) and there's a release candidate available (1.0-RC1) on Sourceforge or the Maven central repository.

What does it do?
Pojomatic is a Java library which provides automatic and configurable implementations of the hashCode(), equals(Object) and toString() methods inherited from java.lang.Object, using annotations. POJOs (Plain Old Java Objects) + automatic implementations of common methods = Pojomatic.

This is useful because it is generally a good idea to override hashCode() and equals(Object), and sometimes toString(). One could manually implement these methods, but that is time-consuming and prone to error (e.g. forgetting to check for null everywhere). Instead, one could have an IDE generate implementations of these methods. As with a lot of generated code, this can be like slapping your code with an ugly stick. Besides, I'd often add fields and/or methods to the class later and forget to re-generate the implementations, which may have no effect or may lead to very subtle, hard-to-detect bugs (e.g. two objects are equal when they shouldn't be => two different people mistaken for the same person => money is deposited to the wrong account => one person is happy, while you are not, because you have to work late to track down and fix the bug).

How do I use it?
The easiest way to use Pojomatic is to put one annotation (@AutoProperty) on your class and delegate the desired method(s) to the corresponding static methods in Pojomatic. For example:
import org.pojomatic.Pojomatic;
import org.pojomatic.annotations.AutoProperty;

@AutoProperty
public class Person {
  private final String firstName;
  private final String lastName;
  private final int age;

  public Person(String firstName, String lastName, int age) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
  }

  public String getFirstName() {
    return this.firstName;
  }

  public String getLastName() {
    return this.lastName;
  }

  public int getAge() {
    return this.age;
  }

  @Override public int hashCode() {
    return Pojomatic.hashCode(this);
  }

  @Override public String toString() {
    return Pojomatic.toString(this);
  }

  @Override public boolean equals(Object o) {
    return Pojomatic.equals(this, o);
  }
}

By default, @AutoProperty tells Pojomatic to automatically detect all of your fields and use them in all of the Pojomatic.* methods. We'll see the different options later, but in the above example, all of the fields will be used for hashCode(), equals(Object) and toString(), as shown here:
public static void main(String[] args) {
  Person johnDoe = new Person("John", "Doe", 32);
  System.out.println(johnDoe.hashCode());
  System.out.println(johnDoe.equals(new Person("John", "Doe", 32)));
  System.out.println(johnDoe.toString());
}

Outputs:
-2068529904
true
Person{firstName: {John}, lastName: {Doe}, age: {32}}

Using all fields for each of hashCode(), equals(Object) and toString() is usually not advised, however, so @AutoProperty can be configured to include automatically detected properties in any valid combination of these methods (including something in hashCode() without including it in equals(Object) violates the contract of hashCode(), so this is not allowed). Additionally, properties can be configured individually via the @Property annotation. When both annotations are present, @Property takes precedence, since it applies only to the one property. Using @Property gives you complete control and makes @AutoProperty optional. A common practice is to have all properties included in equals(Object) and toString(), while only one or two key properties are included in hashCode(), like so:
import org.pojomatic.Pojomatic;
import org.pojomatic.annotations.AutoProperty;
import org.pojomatic.annotations.DefaultPojomaticPolicy;
import org.pojomatic.annotations.PojomaticPolicy;
import org.pojomatic.annotations.Property;

@AutoProperty(policy=DefaultPojomaticPolicy.EQUALS_TO_STRING)
public class Book {

  @Property(policy=PojomaticPolicy.ALL)
  private final String isbn;

  private final String title;

  public Book(String isbn, String title) {
    this.isbn = isbn;
    this.title = title;
  }

  public String getIsbn() {
    return isbn;
  }

  public String getTitle() {
    return title;
  }

  @Override public int hashCode() {
    return Pojomatic.hashCode(this);
  }

  @Override public String toString() {
    return Pojomatic.toString(this);
  }

  @Override public boolean equals(Object o) {
    return Pojomatic.equals(this, o);
  }
}
Both @AutoProperty and @Property can use accessor methods (getters) instead of fields in situations where using accessors is more desirable or when a SecurityManager prevents Pojomatic from accessing private fields through reflection.
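
Here's a rough sketch of what getter-based detection might look like. I'm going from memory on the autoDetect attribute and the AutoDetectPolicy enum, so treat those names as assumptions and check the Javadoc:

import org.pojomatic.Pojomatic;
import org.pojomatic.annotations.AutoDetectPolicy;
import org.pojomatic.annotations.AutoProperty;

// Detect properties from accessor methods instead of fields
// (assumed API: an autoDetect attribute taking AutoDetectPolicy.METHOD).
@AutoProperty(autoDetect = AutoDetectPolicy.METHOD)
public class Account {
  private final String accountNumber;

  public Account(String accountNumber) {
    this.accountNumber = accountNumber;
  }

  // Picked up as the "accountNumber" property because it is a getter.
  public String getAccountNumber() {
    return accountNumber;
  }

  @Override public int hashCode() {
    return Pojomatic.hashCode(this);
  }

  @Override public boolean equals(Object o) {
    return Pojomatic.equals(this, o);
  }

  @Override public String toString() {
    return Pojomatic.toString(this);
  }
}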

Briefly, there is also a feature that lets you customize the String representation of each property. For example, this would be useful if one of your properties contains sensitive data such as an account number, credit card number or social security number (see AccountNumberFormatter). Pojomatic provides the ability for you to define your own formatter implementations as well.

Feedback
Pojomatic has been a lot of fun to work on, and I hope you will find it useful. I'm confident that after trying it out, you will not want to go back to handcrafting equals methods or using ugly IDE-generated code instead. Any feedback is appreciated, as well as feature requests, so what do you think?

Monday, October 27, 2008

Learning Wicket

I'm always on the lookout for different ways of doing the things I'm familiar with. Once in a while, I learn a better way of doing things. Even if I don't, I find that it helps me realize what I like and dislike about the techniques that I'm used to.

Take creating web applications, for example. I'm familiar with JSF, both with and without Seam. Overall, I would say that I agree with the ideas behind JSF, but in practice it is not easy to be productive developing a medium-to-large-sized web app. It seems like some things that should be very easy to do are difficult, but maybe that's just me. Seam adds a bit more complexity, but helps overall. To the JSF world of managed beans, Seam adds scopes: a way to manage the lifecycles of the rest of your objects. This is a good idea on paper, but feels a bit like using global variables (albeit with managed lifecycles).

So I decided to give Wicket a try. What stood out to me right away was its simplicity. There's very little configuration, and no XML other than web.xml (which makes me realize how much I dislike having stuff like page navigation in an XML file). Each page template corresponds to one Java class. Also, there is no EL or OGNL, so your markup is sure to have no logic whatsoever, which is great for those of us who prefer to code in Java. That's the bottom-line benefit that I see with Wicket: everything happens in Java code, which means you get all of the benefits of writing Java in an IDE, from debugging to refactoring. I haven't tried it, but I've read that it's easy to use alternative JVM languages such as Scala and Groovy instead, if you are so inclined.
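
To give a feel for how little is involved, here's a minimal sketch of a Wicket 1.3 page. The class name, component id and markup are mine, invented for illustration:

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HelloPage extends WebPage {
  public HelloPage() {
    // Attaches a component to the element marked wicket:id="message"
    // in HelloPage.html, which sits next to this class on the classpath:
    //
    //   <html><body>
    //     <span wicket:id="message">placeholder text</span>
    //   </body></html>
    add(new Label("message", "Hello, Wicket!"));
  }
}

A designer can open HelloPage.html directly in a browser and see the placeholder text; at runtime, Wicket swaps in the Label's value.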

Wicket is very friendly to the web designer because it does not invade the markup, so the designer can see what they are getting without firing up a web container. For now, I haven't seen any Wicket components which look good "out of the box" (translation: you probably need a web designer). However, there is no reason why there couldn't be such component libraries for Wicket. That will come with increased adoption, if it happens. After all, creating a component library in Wicket means putting your classes and markup in a JAR file, so it couldn't be easier.

Some minor gripes I have with Wicket are that the markup has to stay in sync with your Java class (though all web frameworks I have used also have this problem), and that it can sometimes add junk to your URLs after a form submit (maybe there's a way around that). Also, there's quite a bit of casting when using version 1.3, but they are adding generics to version 1.4, which should minimize that. Overall, I'm very impressed.

Wednesday, March 12, 2008

Ideal command prompt


In setting up my new laptop, I've been experimenting with different command prompt settings in bash. Here is what I've settled on for now. It's simple, yet gives me all of the information I need in a format that is easy for the eye to parse, because the text color of the full path is inverted relative to the other colors.

If you are interested in using/tweaking it:
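# \u expands to the username and \w to the current working directory
# \e[0;07m turns on reverse (inverted) video for the path; \e[m resets it
# \[ and \] mark the escapes as non-printing so bash wraps lines correctly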
export PS1="\u:\[\e[0;07m\w\e[m\]> "

What's your ideal PS1? Here's a how-to article. It's written with Linux in mind, but the information applies to pretty much any environment where bash is present (OS X, Unix, Cygwin, etc.).

EDIT: My original post did not wrap the color code in \[ and \] which caused the first line wrap to overwrite the current line. Fixed!