What does the rest of blogspace say about Version Numbers?

Searching the internet for version numbers is largely pointless: the hits are useless. I’ve tried variations on “version numbers daily build.”

However, in my voracious blog reading I remembered a few relevant posts by bloggers I follow, so I figured I'd link to them here to have them all in one spot.

  • Jensen Harris’ post explaining Office build numbers. That post is only a couple of weeks old as I write this. The Office team uses a date-based build number.
  • The next was the hardest to find because I knew he had posted about it; it turns out he mentioned it in the context of another subject, whereas I expected a dedicated post on the boring subject of version numbers. Wesner Moise has a quick breakdown of CLR version numbers. So quick I’ll quote it here (I’ve also sketched what that date-based scheme might look like in code, after this list):

    Longhorn uses version 2.0.31113.25 of the CLR, whereas the CTP is a much more recent build, 2.0.40301.9. The third number is the build number. It likely indicates the day the build was made (under the format YMMDD). The last number most likely reflects the number of attempts made to stabilize that build. That Longhorn’s 25 is almost triple that of the CTP’s 9 is a good indication of how much more stable Longhorn’s earlier version of the CLR is compared to the CTP. The PDC build used build 30703, which had been prepared for months before PDC.

  • The last link I found was the most appropriate to my query: Suzanne Cook, who unfortunately appears to have stopped blogging last year, has a post on what to do with internal builds versus external builds; she appears to be in the latter camp.
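Since the date-based scheme keeps coming up, here’s a rough sketch of how a build number like Wesner’s YMMDD example might be generated. The class and method names are mine, purely for illustration; I have no idea what the real build lab actually does.

using System;

class BuildNumber
{
    // Guessing at the YMMDD scheme: last digit of the year, two-digit month,
    // two-digit day. The revision part is just a per-day rebuild counter.
    static int FromDate(DateTime date)
    {
        return (date.Year % 10) * 10000 + date.Month * 100 + date.Day;
    }

    static void Main()
    {
        int build = FromDate(new DateTime(2004, 3, 1));
        int revision = 9; // e.g. the ninth attempt at stabilizing that day's build
        Console.WriteLine("2.0.{0}.{1}", build, revision); // prints 2.0.40301.9
    }
}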

Now playing: Headstones – Tiny Teddy

Why Would You Want An Always-Increasing Version Number?

This is a follow up to my last post about version numbers.

There’s a very practical reason for having a monotonically increasing version number as a function of time: upgrading installers. If your daily build produces an executable along with an upgrading installer, then you definitely want the build number to increase with every daily build. The example in the last post did not have this requirement, but that doesn’t negate that it’s a very good, practical reason to add complexity to the build process.
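As a quick illustration (my own sketch, not tied to any particular installer technology), if the build number always increases, the upgrade decision boils down to a simple version comparison:

using System;

class UpgradeCheck
{
    static void Main()
    {
        // Hypothetical daily builds where the third part is a date-based
        // build number, so a later build always compares as greater.
        Version installed = new Version("1.0.51107.2");
        Version incoming = new Version("1.0.51108.1");

        // System.Version implements IComparable, which is all the logic an
        // upgrading installer needs to decide the incoming build is newer.
        if (incoming.CompareTo(installed) > 0)
            Console.WriteLine("Build {0} is newer; upgrade in place.", incoming);
        else
            Console.WriteLine("Build {0} is already current.", installed);
    }
}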

The benefit you get from an upgrading installer is that when installing the latest and greatest daily build, you don’t have to uninstall the last version. That’s about the extent of my installer knowledge (and the wording I used is probably grossly inaccurate, but who’s going to correct me?).

This requirement is probably only valid if you release an installer. I don’t have much experience with web apps, but my guess is that they are updated in a far less formal manner, so version numbers for daily builds matter less there. For component developers, released components should definitely have version numbers – that’s just about the only way to tell which one you have. But internal daily builds of components? I’m not convinced you need them.

Now playing: Headstones – Whatchagonnado

Cyclomatic Complexity

Cyclomatic complexity measures the number of linearly independent paths through a piece of code. If you want to impress people with fancy science talk, you can refer to this page at Carnegie Mellon’s Software Engineering Institute. Suppose you had the following method on a class:

public bool IsSomeValue(bool foo, bool bar)
{
    // Contrived example: only the branching matters here, not the result.
    int i = 0;
    if (foo)
        i += 1;
    if (bar)
        i -= 1;
    return (i + 2) > 0;
}

This method has a cyclomatic complexity of three: two decision points plus one, which corresponds to three linearly independent paths through the code. (The total number of execution paths is four, one for each combination of the booleans.) It’s just a little example; we’ve all seen methods way more complex than this. What’s this metric for?

Well, the more paths through the code, the more difficult it is to debug and to test. The harder it is to debug and test, the higher the likelihood of bugs. So measuring cyclomatic complexity is a way to find the methods that could potentially be the buggiest in a class. Note that it’s only a potential for bugginess; no software metric is absolute, and computer science isn’t a science, you know. For example, suppose you had a giant enum of 20 values that you switched on. That method would have a cyclomatic complexity of at least 20, but is it necessarily that complex? Well, not really. However, if a method had a cyclomatic complexity of 20 and there wasn’t a switch statement in it, alarms should be going off.
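To make that concrete, here’s a contrived sketch (my own example, trimmed to six enum values instead of twenty): every case adds one to the cyclomatic complexity, yet the method is about as readable and testable as code gets.

public enum OrderStatus
{
    New, Paid, Packed, Shipped, Delivered, Cancelled
}

public static class OrderStatusText
{
    // Each case bumps the cyclomatic complexity by one, but there is
    // nothing hard to understand or to test here.
    public static string Describe(OrderStatus status)
    {
        switch (status)
        {
            case OrderStatus.New:       return "Order received";
            case OrderStatus.Paid:      return "Payment cleared";
            case OrderStatus.Packed:    return "Packed and ready";
            case OrderStatus.Shipped:   return "On its way";
            case OrderStatus.Delivered: return "Delivered";
            case OrderStatus.Cancelled: return "Cancelled";
            default:                    return "Unknown status";
        }
    }
}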

Measuring Cyclomatic Complexity is another tool in the toolbox to show you methods that might be troublesome and may require refactoring. It’s an especially good tool if you have high code coverage from unit tests: if you’re around 95 percent, complex methods may not jump out at you because every case is covered by a test. Just because it’s being tested doesn’t mean that the code is as good as it could be.

So what tools are there for measuring Cyclomatic Complexity? I’ve looked at a couple for C#:

  • CCMetrics – This one is good for measuring the overall complexity of an assembly. It says right on the site that it ain’t ready for primetime, but it’s just a command line tool, so it’s really easy to get started, and it generates an XML file for studying problems in detail. It works on the compiled assembly. The two measurements that I like are code reuse and complexity reuse. The price? Free.
  • devMetrics – This tool integrates into Visual Studio and works on the source code. I’ve only used the free edition, but with the one you pay for, you can define your own metrics. The output is an HTML table containing stats on your solution. One of them is Cyclomatic Complexity, including the max and average complexity for each class. It’s great for quickly identifying problem areas.

There are others that are included in larger toolsets, but for the money, the two above work well for me.

Shipping with Bugs.

There have been a number of items on the ol’ information superhighway over the past week dealing with software quality.

First, once Visual Studio 2005 officially shipped on Monday, a number of high-profile .NET bloggers (Wesner, Frans, Roy) posted about bugs they experienced with the RTM version. Mini-Microsoft summed it up in a couple of posts (here and here). There was this hint of outrage on a number of blogs that Microsoft would ship software with bugs; many claimed that VS 2005 was a piece of crap. There were some great counter-arguments from Paul Vick and Wesner. They’re right: it’s not crap, but it ain’t perfect.

Then Wired posted an article on the ten worst software bugs in history. Scott Berkun had a great point on his blog about that: they’re the worst not because of the errors themselves, but because of the field the software was written for (e.g. nuclear power plants versus your MP3 player).

I don’t know if Mr Sink wrote his latest article with all of the above in mind, but it’s certainly relevant. He talks about shipping with bugs. Eric, like Joel Spolsky, is a great writer who is able to say exactly what you’re thinking but in a way you’d never be able to. You can only exclaim “Yes! That’s totally it,” then get everyone you know to read it. So that’s what I’m doing.

Go read it: My Life as a Code Economist.

Never, ever, ever, ever, ever ever ever, use the editor in .Text to write your posts

I just spent the last two hours writing an article for all of you about Nullable<T> and the XmlSerializer, but my blog engine prompted me with a login screen when I hit Post, and the post was subsequently lost. I may appear calm and civilized with my text here, but I’ve been swearing non-stop since it happened.

So here’s a note to myself: Print this out.

Dear Jason, 

Never, EVER use the editor in .Text to write a long post. You will lose it and the time you spent on it. Your 2 readers will lose out as well: they wait with bated breath for every post. We both know you don’t post enough to satisfy them, so you must – must – not waste the time you spend on posts by losing them. Use Word. Save the document. Then paste it in and hit Post.

Thank you,

Jason