A Fast Equals() – Remember to Always Measure

For years, I thought I had the one, true answer to Equals() from seeing something in some MSDN article a long, long time ago – like 2002 or 2003. Or maybe I saw it on Brad’s or Krystof’s blog and freaked out because I wasn’t doing it. Whatever the case, I’d make sure to point out the “proper” way to do Equals() to my colleagues. And I always made sure that I’d do it the same way for all my types that needed Equals overridden. Then I decided to measure it.

So, what did I think the best way to do Equals was? Consider this type:

    class MyClass
    {
        public int NumberValue;
        public string StringValue;
    }

If I were to write Equals the way I used to, I would write it the following way:

        public override bool Equals(object obj)
        {
            if (obj == null || obj.GetType() != GetType())
                return false;
            if (ReferenceEquals(obj, this))
                return true;
            MyClass other = (MyClass) obj;
            return other.NumberValue == this.NumberValue &&
                   other.StringValue == this.StringValue;
        }

Note that the above implementation satisfies the conditions for a robust Equals. The important part of Equals() is that it covers all of the following cases:

  • it returns false if obj is null;
  • it returns false if obj is not the same type as this;
  • it returns true if the references are the same;
  • it doesn’t throw Exceptions; and
  • it doesn’t allocate memory.

The actual evaluation of equality is specific to each class. In the example above, the equality evaluation is the return statement comparing the string and the int of both MyClass instances. The above list of conditions is boilerplate and should be met for every Equals method you write.

So what’s the problem? My Equals method does everything in that list just fine. Right?

Two of the conditions are trivial to meet: the check for null and the check for reference equality. The hard one to meet, perhaps because there are so many ways of doing it, is checking for the right type. In my method above, I check the type by comparing GetType() of both obj and this. If they aren’t equal, I return false. That turns out to be 5 times slower than the other two ways of doing it: the is and as operators.

The .NET Design Guidelines recommend you use the as operator to check the type rather than the is operator because it does the type check and assignment all at once. So let’s re-write the Equals method to use the as operator:

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(obj, this))
                return true;
            MyClass other = obj as MyClass;
            if (other != null)
              return other.NumberValue == this.NumberValue &&
                     other.StringValue == this.StringValue;
            return false;
        }

This method meets all the conditions of a good Equals and has the advantage of being pretty fast, faster than the first way I did it anyway. Since the gurus in Redmond recommend the as operator, you’d think that it’s the fastest: wrong! Check it:

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(obj, this))
                return true;
            if (obj is MyClass)
            {
                MyClass other = (MyClass) obj;
                return other.NumberValue == this.NumberValue &&
                       other.StringValue == this.StringValue;
            }
            return false;
        }

Equals with the is operator and then casting is actually the fastest of them all (by about 10% when compared to the as operator). All three methods meet the conditions of a good Equals method, but the least intuitive one – to me at least – has the advantage of being the fastest. And it’s cheap speed, too: you get it just by implementing Equals the same way every time for every type. You generally want Equals to be pretty fast because it will show up a lot in loops and operations on collections.
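If you want to run the comparison yourself, a rough harness along these lines will do. It isn’t the exact code I used to measure, just a sketch: it times a pile of Equals() calls against whatever implementation MyClass currently has, so swap in each of the three versions above and compare the elapsed times.

    using System;
    using System.Diagnostics;

    class EqualsBenchmark
    {
        static void Main()
        {
            // Two distinct but equal instances, so the reference-equality
            // shortcut never fires and the type check always runs.
            MyClass a = new MyClass();
            a.NumberValue = 42;
            a.StringValue = "hello";

            MyClass b = new MyClass();
            b.NumberValue = 42;
            b.StringValue = "hello";

            const int iterations = 10000000;

            Stopwatch watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                a.Equals(b);
            }
            watch.Stop();

            Console.WriteLine("{0} calls took {1} ms", iterations, watch.ElapsedMilliseconds);
        }
    }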

My point? Always measure – don’t assume you’re doing things right. It’s good to go back and think about the fundamentals once in a while.

Using the Command Pattern in Windows Forms clients

I’ve been doing a lot of research with the Command pattern lately at work. Here’s what I found: You’re doing yourself a disservice – and your team – if you don’t play the hell outta this thing!

It’s a very powerful design for non-trivial Windows Forms clients. What’s non-trivial? Anything that uses a menu, toolbar or context menu to perform the same action. In fact, it’s so useful and so powerful that I’m a little disappointed that Microsoft didn’t provide something like it in the framework itself. They’ve provided something in Avalon (WPF), but I haven’t had the chance to play with it. I know they use it: Visual Studio and Office take advantage of it. (For example, think of how many ways you can copy and paste text.)

What are some of the advantages?

  • Automation – how easy is it to automate tasks in Visual Studio with a macro? Or reassign a keyboard shortcut? Enough said.
  • Completely decouples the UI from the business objects. It’s easy to say this but really hard to do it, especially if deadlines are tight. With commands, you can keep the UI and throw away the business objects or vice versa.
  • Easily allows plugins or addins. Suppose you have a client with a unique look that you want to control, but you want to offer the ability to extend or introduce functionality to third parties. Commands allow that. It’s how Visual Studio Add-ins do it.

There are disadvantages, of course – every design has drawbacks:

  • It complicates the application design. Every pattern complicates application design; they’re used because the benefits outweigh the cost. In fact, they should only be used when the benefits outweigh the costs.
  • Using commands complicates your controls and forms. Suppose you have a button that will be used to execute a command. Do you subclass Button so that it contains a Command and override OnClick? Do you use a normal Button and handle the Click event in the form? Swap Button for MenuItem and repeat the questions.
  • Debugging is a little harder. There are features in .NET 2.0 to mitigate this one.

So what does a Command look like? Well, that’s largely up to you. The bare minimum would be one method that would allow you to perform the action associated with a Command:

    public interface ICommand
    {
        void Execute();
    }

All of your concrete commands would implement this interface and when the button or menu is clicked, call command.Execute(). Easy, eh? There is a whole lot more you can do with the shared interface, however. In fact, there is a lot of common code that you can take advantage of, which is why I prefer an abstract class, like so:

    public abstract class Command
    {
        public string Key { get; protected set; }
        public string DisplayName { get; protected set; }

        public bool Enabled { get; }

        public abstract void Execute();
    }

All the properties would have implementations; I’ve left them out because I’m only concerned with the API right now. We still have that abstract Execute() method, but now we have a few properties that deserve some explanation. To further separate the UI from the business logic, the text that the user sees is associated with the command rather than the UI, so you can easily replace the commands without modifying the UI at all. This is extremely powerful when you start dealing with collections of commands that can all be handled in the same way. So that explains DisplayName.

The Enabled property is fairly obvious: there are times when it would either be impossible or just plain wrong to perform an action; in those cases, you want to disable the UI so that the action cannot be performed. I’ll come back to the Enabled property in a second.

The Key property is for when Commands are grouped together in a collection. It makes sense to store most of your commands globally with the main form in a list or map. We’ll need a way to uniquely identify them so that we can retrieve them later. That’s what the Key property is for. Go to Tools -> Options in Visual Studio, select Keyboard. There you’ll see the keys for the commands in Visual Studio, no pun intended.
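To make that concrete, here’s a minimal sketch of the kind of registry the main form could hold. The names here (CommandRegistry, Register, Find) are mine, made up for illustration, not from any framework:

    using System.Collections.Generic;

    // Minimal sketch of a global command registry keyed by Command.Key.
    public class CommandRegistry
    {
        readonly Dictionary<string, Command> commands = new Dictionary<string, Command>();

        // Adds a command under its Key; throws if the key is already taken.
        public void Register(Command command)
        {
            commands.Add(command.Key, command);
        }

        // Looks a command up by key, e.g. "Edit.Copy"; returns null if unknown.
        public Command Find(string key)
        {
            Command command;
            return commands.TryGetValue(key, out command) ? command : null;
        }
    }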

Now, by itself, the above Command is very powerful. However, we’re not done. Combine the above with Data Binding in Windows Forms and you have to write almost no code at all. Data Binding is a very powerful technology, which you’ll know about if you’re one of the millions of ASP.NET developers out there. In .NET 2.0, Windows Forms apps got the same treatment. I can’t cover data binding in detail in this post, so I’ll just assume you know about it. To really take advantage of Data Binding I have to modify my Command class slightly:

    public abstract class Command : INotifyPropertyChanged
    {
        bool enabled;
        string key, displayName;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Key
        {
            get { return key; }
            protected set { key = value; }
        }

        public string DisplayName
        {
            get { return displayName; }
            protected set
            {
                if (displayName != value)
                {
                    displayName = value;
                    OnPropertyChanged(new PropertyChangedEventArgs("DisplayName"));
                }
            }
        }

        public bool Enabled
        {
            get { return enabled; }
            set
            {
                if (enabled != value)
                {
                    enabled = value;
                    OnPropertyChanged(new PropertyChangedEventArgs("Enabled"));
                }
            }
        }

        public abstract void Execute();

        protected void OnPropertyChanged(PropertyChangedEventArgs e)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
                handler(this, e);
        }
    }

OK, I gave you the whole implementation. You’ll see that my Command class now implements the INotifyPropertyChanged interface. This allows the data binding code to update the control that is bound to the data when the data changes. You’ll note that DisplayName and Enabled raise the PropertyChanged event. Therefore you can create a Button, hook it up to a Command with a minimum of code, and let the power of data binding deal with turning the command on and off:

    Button button = new Button();
    Command command = new ArbitraryCommand("My Data");
    button.DataBindings.Add("Text", command, "DisplayName");
    button.DataBindings.Add("Enabled", command, "Enabled");
    button.Click += delegate { command.Execute(); };

This technique is very powerful when your business objects change on their own without user interaction.
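ArbitraryCommand above is just a stand-in name. As a sketch of what one concrete command might look like, here’s a hypothetical SaveDocumentCommand along with a made-up Document business object; when the document’s dirty flag flips on its own, the command updates Enabled and data binding takes care of the button:

    using System;

    // Made-up business object with a dirty flag and change notification.
    public class Document
    {
        bool dirty;

        public event EventHandler DirtyChanged;

        public bool IsDirty
        {
            get { return dirty; }
            set
            {
                if (dirty != value)
                {
                    dirty = value;
                    if (DirtyChanged != null)
                        DirtyChanged(this, EventArgs.Empty);
                }
            }
        }

        public void Save()
        {
            // persist the document somewhere, then clear the flag
            IsDirty = false;
        }
    }

    // Hypothetical concrete command: only enabled while there is something to save.
    public class SaveDocumentCommand : Command
    {
        readonly Document document;

        public SaveDocumentCommand(Document document)
        {
            this.document = document;
            Key = "File.Save";
            DisplayName = "Save";
            Enabled = document.IsDirty;

            // When the business object changes on its own, Enabled follows it,
            // and data binding updates any bound buttons or menu items.
            document.DirtyChanged += delegate { Enabled = document.IsDirty; };
        }

        public override void Execute()
        {
            document.Save();
        }
    }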

There are a few things you have to note. The DisplayName property will have to be internationalized if your app is interested in the rest of the world, which will make the class more complex. Also, the Command pattern kind of falls down when you need to pass data to the command. You lose the polymorphism of the Command class when you have to know either the specific concrete type or what data must be passed to a particular command.

Still, if you are doing serious Windows Forms apps, you should consider the Command pattern.


Now playing: Santana – Just Feel Better (Feat. Steven Tyler Of Aerosmith)

PowerShell profile folder is different than Monad

This week, Microsoft announced Monad’s product name, PowerShell, as well as releasing RC1 (the very good user guide can be found here). There are quite a few changes that you can read about on their new blog: http://blogs.msdn.com/powershell. While I think Monad is just fine for a name, I like PowerShell, too. It’s much better than the other names these dummy marketers came up with. I actually have this image in my head when I think of the name PowerShell that I might just blog about, but not in this post. While we’re still on naming, though, msh is a way better name for a file extension than ps1. Oh well.

Because of the name change, the profile folder in My Documents has changed but not to something intuitive, so I figured I’d post about it and hopefully let Google pick it up. The folder name for the betas was My Documents\msh. In this folder, you could put your profile script to customize the shell as well as other scripts. I posted earlier about setting the Visual Studio environment variables via script (btw, I plan on making that script better, stay tuned). That’s where you’d put those scripts.

Since the name change to PowerShell, it’d be natural to think that the folder would have to be renamed from msh to ps1, or ps. You’ll find that that is totally wrong. I think for the same reason that the .msh extension got changed to .ps1, instead of just .ps, the new folder name is My Documents\PSConfiguration.

You’ll also have to rename all of your scripts to .ps1. I tried to come up with a quick little command, but I failed and just did it by hand (I only had three). I’m still stuck in the learning curve. Luckily, Peter Provost, who is far more proficient in the new shell, gives us just what we need.


Now playing: Matthew Sweet – Sick Of Myself

MSBuild Resources

I promised this post for the talk I gave at the Victoria .NET Developers Association Code Camp. This post is a bunch of pointers to good stuff about MSBuild that I collected for my talk.

  • Introduction to MSBuild – this is a great introductory article on MSBuild, which covers the main points of my talk very well.
  • MSBuild Blog – I’ve extolled the virtues of the MSBuild blog before. They are probably the best resource for when you rub up against a corner of MSBuild you’re not comfortable with; I’ve found very few corners that I am comfortable with.
  • AssemblyInfoTask – Here’s a task for controlling the versioning of your assemblies for daily builds and the like.
  • Channel 9 MSBuild Wiki – a good place for quick how-to articles
  • MSDN Main Page for MSBuild – that’s pretty self-explanatory, eh?
  • MSBuild Task Reference – pointer to the default tasks that ship with MSBuild.
  • MSBuild Community Tasks – a great open source project that has a whole host of useful tasks, and integrates superbly with your development environment: a help file, a dll xml doc file, and schema extensions so their custom tasks show up in Intellisense when editing MSBuild files.
  • Microsoft UK Solutions Build Framework – Another collection of 170 custom tasks! Wow.
  • Integrating MSBuild with CruiseControl.NET – integrating MSBuild with everyone’s favourite free .NET continuous integration server.
  • Using MSBuild and ILMerge to Package User Controls for Reuse – I include this as an example of what MSBuild’s task model lets you do without writing any code. Plus, Scott is someone you should be reading.
  • MSBee Beta 2 Home – Target the .NET Framework 1.1 with this toolkit written by MS employees. Haven’t used it yet, but I hear good things. This would be a sweet situation: use VS 2005’s cool features (snippets, templates, etc) and build to 1.1 because you have to support your clients on that framework. I’ll have to try that out sometime. It really shows off the power of the extensibility model.
  • MSBuild Team Blog : Post Summary: MSBuild in Visual Studio – A great series of articles detailing how VS uses MSBuild under the covers. This summary article points to all of them.

Enjoy. If you see any more good ones, leave them in the comments.


Creating Professional Documentation with NDoc and GhostDoc

At work we use NDoc 1.3 for all of our documentation needs. It’s a pretty good tool for the price. I was playing around with the options on an NDoc file and found that the default values for some settings aren’t the optimal ones. Here are some you should change from the default to make your documentation better and your life easier:

  • Set CleanIntermediates to True: this will delete the ndoc_msdn_temp folder after the chm has been created;
  • Set AutoDocumentConstructors to False: if your constructor throws exceptions, say, which you document in the file, they will be missed if this is set to true;
  • Set IncludeAssemblyVersion to True: this one is obvious and you may already be doing it;
  • Set DocumentAttributes to True: this will output the attributes attached to the member in the syntax portion;
  • Set DocumentProtectedInternalAsProtected to True: again, obvious; you don’t want to give away implementation details;
  • Don’t EVER set DocumentExplicitInterfaceImplementations to True; NDoc barfs on that.

Depending on what release you are working on, you may want to set Preliminary to True. This will add “This documentation is preliminary and subject to change.” in red text to every page.

There’s a handy-dandy list of supported tags for NDoc that can be found here: http://ndoc.sourceforge.net/content/tags.htm

NDoc supports more tags than the ones supplied in the IDE for .NET 1.1, and using them will make the docs look more like MSDN docs with very little effort on your part.

Also, an awesome tool that takes some of the drudgery out of documentation is GhostDoc. It’s an addin for VS and it’s free; right-click on a method or property and it will parse the name and figure out the proper summary for you. It’s pretty smart when it comes to inheritance documentation and the override methods for things like Object.Equals(). It’s a real time saver. And you can add your own. I’ve added some for BeginXxx and EndXxx methods among others.

One thing it doesn’t do is parse through the method to find Exceptions that are thrown. I’ve added some ReSharper templates to take most of the drudgery out of that task. I’ve found that I don’t throw very many crazy, custom exceptions; I use the standard ones that the framework provides, which just beg for templated documentation: the various ArgumentExceptions and InvalidOperationException.
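As an illustration, the kind of boilerplate such a template might expand to looks something like this (the ScriptRunner class, the method and the processing flag are made up; the <exception> tags are the point):

    using System;

    public class ScriptRunner
    {
        bool processing;   // made-up state flag

        /// <summary>
        /// Begins processing the specified file.
        /// </summary>
        /// <param name="fileName">The path of the file to process.</param>
        /// <exception cref="ArgumentNullException">
        /// <paramref name="fileName"/> is a null reference.</exception>
        /// <exception cref="InvalidOperationException">
        /// Processing has already been started.</exception>
        public void BeginProcessing(string fileName)
        {
            if (fileName == null)
                throw new ArgumentNullException("fileName");
            if (processing)
                throw new InvalidOperationException("Processing has already been started.");
            processing = true;
            // ...
        }
    }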

Now playing: Odds – Eat My Brain