Loading Visual Studio 2005/.NET 2.0 SDK Tools (vsvars32.bat) in Monad (MSH)

Scott Hanselman’s recent zeal for MSH has motivated me to take another look at Monad.

I saw the original Jeffrey Snover Channel 9 video and downloaded the beta then; he just about drop kicks you with his enthusiasm, so I had to try it. My first two co-op terms (internships, if you prefer) were spent as a system administrator, so there will always be a soft spot in my heart for a powerful command line/scripting platform. So I took a look, played around, saw that it was WAY too much to learn, and quickly dropped it. Now that Scott has extolled the virtues of Monad in recordable audio format, I figured I’d give it another try.

Scott’s right when he says it’s rough to start with; there definitely is a learning curve and I don’t think I’m out of the dark yet. But I’ve decided to take the first steps by customizing the bejeezus out of it. The first thing I wanted to do was start every instance with the VS/.NET SDK tools set up. I’m not a huge command line guy in my day-to-day work, but I tend to use the .NET SDK tools when I do. So it would be more convenient to load them every time I run MSH.

You, of course, know that the “Visual Studio Command Prompt” menu option in the Visual Studio Tools start menu group is merely a call to cmd.exe with a batch file loaded. Well, I figured my first task would be to rewrite that batch file as an msh script (not strictly necessary – at least, I think – but a good place to start scripting), then set it up to be called whenever I start an instance of msh.exe.

So that’s what I did.

See ya.

Why aren’t you leaving?

Oh alright.

The first part, writing the script, was pretty easy. I opened TextPad with a new document and opened the vsvars32.bat file (located in “%PROGRAMFILES%\Microsoft Visual Studio 8\Common7\Tools”) side by side and started converting.

The file in its entirety is provided below. Download it here. It’s not a very complex script and certainly doesn’t take advantage of the power of the msh script engine, but I still took away a lot from doing this script. First, I forgot what a pain in the ass it is to debug scripts if you’re just learning the syntax. I’m sure once I get going, I’ll be more productive with this, but typos really killed me.

I really like setting variables in msh. You’ll note the syntax for setting variables in different scopes makes scripts very clear to read. For example, $env:Path is the %PATH% environment variable; $script:tools sets the tools variable for the whole script. One gotcha that I’m not used to is that functions have to be declared before they’re used. I know that dates me as a young’un, but whatever; I haven’t used C much. JScript and VBScript don’t have that restriction, and that’s where I’ve done most of my scripting. Another thing I like is variable expansion: if you embed a variable name in a string, msh will expand it for you. That’s very handy for writing file paths, as you’ll note below. I put in a few different ways of doing it, just because.

The second part of the problem is to load this for every msh.exe instance. I could do what MS does for the Windows SDK and have a shortcut that calls msh.exe with some command line arguments, notably the script, but that would mean I’d have to open every instance from that same shortcut; that’s inconvenient, especially if, like me, you’re used to opening it from the Run dialog. The best way to do this for my needs (yours may differ) was to set it up in my profile.msh file, located at “My Documents\msh\profile.msh”. This is a place for users to set their own preferences for the shell. I found this post about calling multiple msh scripts from profile.msh, but it didn’t quite work for me. I set a variable to the full path of the vsvars32.msh file and then called . $vsvars, which loads the script for me. That ‘.’ is dot-sourcing, borrowed from unix shells: it runs the script in the current scope rather than a child scope. Run gacutil or xsd and watch it churn out the usage message. Sweet, huh?
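For reference, here’s roughly what the relevant bit of my profile.msh ends up looking like (the variable name and path are mine; adjust the path to wherever you saved vsvars32.msh):

```
# In My Documents\msh\profile.msh
$vsvars = "${env:ProgramFiles}\Microsoft Visual Studio 8\Common7\Tools\vsvars32.msh"
. $vsvars  # dot-source it so it runs in the current scope
```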

# vsvars32.msh
# This is a re-write of the vsvars32.bat file that is
# installed by Visual Studio 2005. I did this to learn
# msh script. Your mileage may vary. This is not supported.

$script:VsInstall = "${env:ProgramFiles}\Microsoft Visual Studio 8"
$script:tools = "$script:VsInstall\Common7\Tools"
$script:vc = "$script:VsInstall\VC"

$env:VSINSTALLDIR = $script:VsInstall
$env:VCINSTALLDIR = $script:vc
$env:FrameworkDir = "${env:windir}\Microsoft.NET\Framework"
$env:FrameworkVersion = "v2.0.50727"
$env:FrameworkSDKDir = $script:VsInstall + "\SDK\v2.0"

write-host "Setting environment for using Microsoft Visual Studio 2005 x86 tools."

$env:DevEnvDir = $script:VsInstall + "\Common7\IDE"
$script:FxDir = $env:FrameworkDir + "\" + $env:FrameworkVersion

# ------------------------------------------------------------------
# Prepend a directory to a semi-colon delimited list of directories
# I stole this function from the SetEnv.msh script provided by the Windows SDK
# ------------------------------------------------------------------
function PrependToVar
{
    param ([string]$list, [string]$newDir)
    return "$newDir;$list"
}

$Path = PrependToVar $env:Path $env:DevEnvDir
$Path = PrependToVar $Path "$script:vc\bin"
$Path = PrependToVar $Path $script:tools
$Path = PrependToVar $Path "$script:tools\bin"
$Path = PrependToVar $Path "$script:vc\PlatformSDK\bin"
$Path = PrependToVar $Path "$env:FrameworkSDKDir\bin"
$Path = PrependToVar $Path $script:FxDir
$Path = PrependToVar $Path "$script:vc\VCPackages"

$Include = PrependToVar $Include "$script:vc\atlmfc\include"
$Include = PrependToVar $Include "$script:vc\INCLUDE"
$Include = PrependToVar $Include "$script:vc\PlatformSDK\INCLUDE"
$Include = PrependToVar $Include "$env:FrameworkSDKDir\INCLUDE"

$Lib = PrependToVar $Lib "$script:vc\atlmfc\lib"
$Lib = PrependToVar $Lib "$script:vc\lib"
$Lib = PrependToVar $Lib "$script:vc\platformsdk\lib"
$Lib = PrependToVar $Lib "$env:FrameworkSDKDir\lib"

$env:Path = $script:Path
$env:Include = $script:Include
$env:Lib = $script:Lib
$env:LibPath = "$script:FxDir;$script:vc\atlmfc\lib"

What does the rest of blogspace say about Version Numbers?

Searching the internet for guidance on version numbers is largely pointless: the hits are useless. I’ve tried variations on “version numbers daily build.”

However, in my voracious blog reading I remembered a few posts by bloggers I follow, so I figured I’d link to them here, all in one spot.

  • Jensen Harris’ post explaining Office build numbers. This post is only a couple of weeks old as I write this. The Office team uses a date-based build number.
  • The next was the hardest to find: I knew he had posted about it, but it turns out he talked about it in the context of another subject, whereas I thought it was a dedicated post on the boring subject of version numbers. Wesner Moise has a quick breakdown of CLR version numbers. So quick I’ll quote it here:

    Longhorn uses version 2.0.31113.25 of the CLR, whereas the CTP is a much more recent build, 2.0.40301.9. The third number is the build number. It likely indicates the day the build was made (under the format YMMDD). The last number most likely reflects the number of attempts made to stabilize that build. That Longhorn’s 25 is almost triple that of the CTP’s 9 is a good indication of how much more stable Longhorn’s earlier version of the CLR is compared to the CTP. The PDC build used build 30703, which had been prepared for months before PDC.

  • The last link I found was the most appropriate to my query: Suzanne Cook, who unfortunately looks like she stopped blogging last year, has a post on what to do with internal builds versus external builds. She is in the latter camp, it appears. 
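Wesner’s YMMDD reading is easy to play with in code. Here’s a quick sketch (the decoding logic and the assumption that the leading digit means the 2000s are mine, not anything official):

```csharp
using System;

class BuildNumberDecoder
{
    static void Main()
    {
        // Decode a CLR build number under the YMMDD reading above:
        // e.g. 2.0.40301.9 -> built 2004-03-01, 9th stabilization attempt.
        Version v = new Version("2.0.40301.9");
        int year  = 2000 + v.Build / 10000;   // Y   (assuming the 2000s)
        int month = (v.Build / 100) % 100;    // MM
        int day   = v.Build % 100;            // DD
        Console.WriteLine("Built {0:D4}-{1:D2}-{2:D2}, revision {3}",
            year, month, day, v.Revision);    // Built 2004-03-01, revision 9
    }
}
```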

Now playing: Headstones – Tiny Teddy

Creating Professional Documentation with NDoc and GhostDoc

At work we use NDoc 1.3 for all of our documentation needs. It’s a pretty good tool for the price. I was playing around with the options on an NDoc file and found that some of the default values for some settings aren’t the optimal ones. Here are some that you should change from the default to make your documentation better and your life easier:

  • Set CleanIntermediates to True: this will delete the ndoc_msdn_temp folder after the chm has been created;
  • Set AutoDocumentConstructors to False: if, say, your constructor throws exceptions that you document in the file, that documentation will be missed if this is set to True;
  • Set IncludeAssemblyVersion to True: this one is obvious and you may already be doing it;
  • Set DocumentAttributes to True: this will output the attributes attached to the member in the syntax portion;
  • Set DocumentProtectedInternalAsProtected to True: again, obvious; you don’t want to give away implementation details;
  • Don’t EVER set DocumentExplicitInterfaceImplementations to True; NDoc barfs on that.

Depending on what release you are working on, you may want to set Preliminary to True. This will add “This documentation is preliminary and subject to change.” in red text to every page.

There’s a handy-dandy list of supported tags for NDoc that can be found here: http://ndoc.sourceforge.net/content/tags.htm

They support more tags than the ones supplied in the IDE for .NET 1.1, and using them will make the docs look more like MSDN docs with very little effort on your part.

Also, an awesome tool that takes some of the drudgery out of documentation is GhostDoc. It’s an addin for VS and it’s free; right-click on a method or property and it will parse the name and figure out the proper summary for you. It’s pretty smart when it comes to inheritance documentation and the override methods for things like Object.Equals(). It’s a real time saver. And you can add your own. I’ve added some for BeginXxx and EndXxx methods among others.

One thing it doesn’t do is parse through the method to find exceptions that are thrown. I’ve added some ReSharper templates to take most of the drudgery out of that task. I’ve found that I don’t throw very many crazy, custom exceptions; I use the standard ones the framework provides, which just beg for templated documentation: the ArgumentException family and InvalidOperationException.
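To give an idea of what that templated exception documentation looks like, here’s a made-up method (the class, names, and messages are all invented for illustration) with its <exception> tags filled in:

```csharp
using System;

public class LogEntry
{
    private bool written;
    private int category;

    /// <summary>
    /// Sets the category for this log entry.
    /// </summary>
    /// <param name="category">The category to assign.</param>
    /// <exception cref="ArgumentOutOfRangeException">
    /// <paramref name="category"/> is less than zero.
    /// </exception>
    /// <exception cref="InvalidOperationException">
    /// The entry has already been written to the log.
    /// </exception>
    public void SetCategory(int category)
    {
        if (category < 0)
            throw new ArgumentOutOfRangeException("category");
        if (this.written)
            throw new InvalidOperationException("The entry has already been written.");
        this.category = category;
    }
}
```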

Now playing: Odds – Eat My Brain

Why System.Uri sucks, part 2

Look up irony in a dictionary; do you know what you’ll find?

The definition of the word irony.

That may not help you, so how about examples of irony? Alanis Morissette’s song Ironic – none of the situations in the song are ironic, yet the name of the song is … Ironic; another example of irony is the recommendation, in a book about public APIs, of a class that has a pretty bad public API: System.Uri.

I wrote last time about System.Uri’s inability to parse mailto-like URI schemes. This time I’m going to talk about what you have to go through if you want to remedy Uri’s problem: I’ll discuss the API exposed to inheritors.

Go to the documentation on MSDN for System.Uri and browse the members of Uri. It’ll show you one protected method in the list: the static EscapeString(). However, inherit from System.Uri, then type “override” followed by a space in Visual Studio 2003. You’ll see a number of methods that you can override: Canonicalize(), CheckSecurity(), Escape(), Parse(), and Unescape(). I believe the reason they are not documented in MSDN is that they never should have shipped. Too bad Abrams and Cwalina didn’t point this mistake out when they advised using this class, like they did throughout the book with other classes.

Documentation probably wouldn’t make much difference because these methods aren’t very useful. They take no arguments, and return nothing (with the exception of Unescape() which takes a string and returns a string). There is no protected property exposing the string given to the constructor, so overriding the methods won’t help you. However, there’s nothing to stop a malicious, or incompetent, coder from overriding them and passing an instance of their class to your API. Will it compromise your system? I’m no security expert, but I don’t think so: they won’t be able to steal passwords with it. But they could take the system down: override Unescape() to return null and you’ll get a NullReferenceException.

Some further suckiness for this API is that they violate one of the rules that Abrams and Cwalina recommend in the Framework Design Guidelines book: don’t call virtual methods from constructors. Run the code below and you’ll see the order of execution.

using System;

public class MyUri : Uri
{
    public MyUri(string uriString) : base(uriString)
    {
        Console.WriteLine("In ctor");
        string escaped = EscapeString(uriString);
        Console.WriteLine("escaped = {0}", escaped);
    }

    protected override void Canonicalize()
    {
        Console.WriteLine("In Canonicalize()");
    }

    protected override void CheckSecurity()
    {
        Console.WriteLine("In CheckSecurity()");
    }

    protected override void Escape()
    {
        Console.WriteLine("In Escape()");
    }

    protected override void Parse()
    {
        Console.WriteLine("In Parse()");
    }

    protected override string Unescape(string path)
    {
        Console.WriteLine("In Unescape(path = {0})", path);
        return path;
    }
}

class Program
{
    static void Main()
    {
        new MyUri("pres:jason@example.com;param= pvalue");
        Console.ReadLine();
    }
}

Yields the following output:

In Parse()
In Canonicalize()
In Escape()
In Unescape(path = ::-1pres:jason@example.com;param= pvalue)
In ctor
escaped = pres:jason@example.com;param=%20pvalue

Hmph. Not so hot.

They changed a lot in .NET 2.0, of course, so you can now override how URIs that System.Uri doesn’t know about get parsed, without extending System.Uri. Oh, and they deprecated the above methods, so you’ll get compiler warnings. I’ll talk more about how to override UriParser next time. (I won’t follow the same title scheme next time, just to keep it interesting. So pay attention! :) )

XmlSerializer, Xsd.exe, Nullable<T> and you.

At work, I’ve been using Xsd.exe and XmlSerializer in V1.1 a lot lately. There are a number of things that aren’t satisfactory about both of them, but this post only talks about a few. Since .NET 2.0 was just released, I began trying a few experiments to see what got fixed. You’ll see that they’ve fixed some issues, but there is still a lot left they could do.


One of the big things Microsoft has said about .NET, and it’s true, is that it has built-in support for XML. There is a lot of support, which is handy, because Microsoft’s marketing message, when .NET came out, was all: “XML! XML! XML! Use XML everywhere.” I detest working with XML, but there are tools that make it bearable, like the XmlSerializer. There are times when you have no choice but to use XML, and with the XmlSerializer, you can hide most of the XML and work with normal classes. Likewise, Xsd.exe is pretty handy; a Swiss Army knife of an XML tool, it can take an assembly and produce a schema of the types; it can take an XML file and generate a schema based on that file; give it a schema, and it’ll generate C# or VB classes or a strongly-typed DataSet; give it a kitchen sink, it’ll do something.


I use it to generate class files from a schema that typically is beyond my control. It generates some truly heinous code for you, embarrassing code; if the code were a person, it’d wear jogging pants to a wedding, laugh at the worst jokes and have terrible teeth.


Suppose I have an XML schema that defines a log file. You can click here to view it. It defines for me XML files like so:

<?xml version="1.0" encoding="utf-8"?>
<log xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
     xmlns="http://www.jasonkemp.ca/logentry.xsd"
     name="MyLog">
  <logEntry category="1">
    <data>This is an entry in my log.</data>
    <timestamp>2005-10-31T20:22:35.75-08:00</timestamp>
  </logEntry>
  <logEntry category="2">
    <data>This is another entry taking advantage of the fact that I don’t need a timestamp</data>
  </logEntry>
</log>

Although the details of the schema aren’t important, there are two things I’d like to point out: both the category attribute (an integer type in the schema, which maps to Int32 in the generated code) and the timestamp element are optional. Keyword: optional. You’ll see why in a second. Like I said earlier, I use Xsd.exe to generate class files from a schema. So if I pass that mother through the tool, I’ll get C# code on the other end.


Click here for the code generated by Xsd.exe V1.1.

Click here for the code generated by Xsd.exe V2.0.


You’ll see in the V1.1 file that what you get is quite appalling: public fields (aaaaaagggh), incorrect casing, etc. You should feel compelled to take what the tool generated and add to it so it doesn’t suck so much. In this contrived case, I’d probably save some browsing around in a command shell by writing the whole thing myself; however, once the schema gets large enough (i.e. lots of complex types), modifying what the tool gives you will save you time. With tools like ReSharper, it’s pretty easy to add properties and constructors to make the types more usable.


Contrast that with the 2.0 version: properties are there now, but they still don’t take advantage of the XmlElementAttribute overload that will take a tag name. The classes are partial and liberally sprinkled with oodles of design-time attributes. These attributes are useless for my scenario, but may be used for some of the other scenarios that Xsd.exe supports. (I typically use the tool, keep the source files, and throw away the schema.)


However, note that in both files, there is a pattern for value types. This is what I really want to talk about. Remember that I said the schema defined the timestamp element and the category attribute as optional? In the generated class files, these values are represented by value types. And how do we represent value types that don’t have a value set? Not elegantly, for certain. So how does the Xsd.exe tool do this? Consider the category attribute; the tool generates this code (kinda, I had to make it better):

    private int category;

    [XmlAttribute("category")]
    public int Category
    {
       get { return this.category; }
       set { this.category = value; }
    }

    private bool categorySpecified;

    [XmlIgnore]
    public bool CategorySpecified
    {
       get { return this.categorySpecified; }
       set { this.categorySpecified = value; }
    }

In order for the XmlSerializer to know that this optional property has a value, there is an additional property, CategorySpecified, to tell the serializer that there is indeed a value. If it’s true, then there is a value; otherwise, there isn’t. The serializer uses this when both serializing and deserializing. When serializing, if an XxxSpecified value is false, it won’t serialize the Xxx property. This is good, because if there are lots of optional elements, we want the XML to stay lean to save bandwidth. However, as a type author, I don’t want this: the type is harder to use, because now a user of my type has to set two properties to set a value, or read two properties to get a value. Then they’ll curse my name and my future children for putting them through such torture.
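To make the pain concrete, here’s the dance a caller ends up doing (I’m assuming the generated class is called logEntry, with members shaped like the pattern above):

```csharp
logEntry entry = new logEntry();
entry.data = "This is an entry in my log.";

// Setting the value isn't enough...
entry.Category = 1;
// ...you must also remember to flip the flag, or the attribute
// is silently dropped when the instance is serialized.
entry.CategorySpecified = true;

// Reading is a two-step as well:
int category = entry.CategorySpecified ? entry.Category : -1; // -1 meaning "not set"
```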


As a way to get around it, I change the property implementation like so:

    public const int SentinelValue = -1;

    private int category;

    [XmlAttribute("category")]
    public int Category
    {
       get { return this.category; }
       set
       {
          this.category = value;
          this.categorySpecified = this.category != SentinelValue;
       }
    }

    private bool categorySpecified;

    [XmlIgnore]
    public bool CategorySpecified
    {
       get { return this.categorySpecified; }
       set { this.categorySpecified = value; }
    }

The bool is still there, because we need it for the XmlSerializer as mentioned above; however, now programmers only have to set the Category property. They do have to know about a “no-value” value, but that can be documented. This method works even better if only a range of values is valid, which can be enforced through range checking and exceptions. If that is the case, the choice of “no-value” value is much easier.


With .NET 2.0, we get a host of new programming toys to play with. One of the less glamorous is nullable types. Nullable<T> is a generic value type that enables us programmers to express “value type without a value” more succinctly. Nullable<T> wraps the Xxx and XxxSpecified pair into one value type, and you can check for null like you would with a reference type. C# has some syntactic sugar to make them easier to use:

         int? i = null;
         Console.WriteLine(i == null);  // prints True
         Console.WriteLine(i.HasValue); // prints False

which is equivalent to saying:

         Nullable<int> i = null;
         Console.WriteLine(i == null);  // prints True
         Console.WriteLine(i.HasValue); // prints False

They’re slower than using the real value type, but that’s an implementation detail. I’m no database guy, but I think it is equivalent to DB NULL for a field (correct me if I’m wrong). So, working with the XmlSerializer like I’ve been, and watching the new framework developments unfold, a couple of questions popped into my mind: Would it be possible to remove those XxxSpecified properties and just use nullable types instead? Would the XmlSerializer treat them as equivalent, since, semantically, they are? Well, let’s find out. First, we’ll remove the XxxSpecified properties, then we’ll change the generated file so that both the category attribute and the timestamp element are nullable types:

    private int? category;

    [XmlAttribute("category")]
    public int? Category
    {
       get { return this.category; }
       set { this.category = value; }
    }

    private System.DateTime? timestamp;

    [XmlElement("timestamp")]
    public System.DateTime? Timestamp
    {
       get { return this.timestamp; }
       set { this.timestamp = value; }
    }

If we try to serialize an instance of this, we get the following exception nested in like three InvalidOperationExceptions (a quirk of the XmlSerializer), courtesy of the totally unhandy Exception Assistant (seriously, that’s the next Clippy): Cannot serialize member 'Category' of type System.Nullable`1[System.Int32]. XmlAttribute/XmlText cannot be used to encode complex types.


Bummer.


Well, let’s see if it will work with elements; XmlElementAttribute can handle complex types. Change the file so that Category is no longer nullable, and try to serialize it. We get the following XML:

<?xml version="1.0" encoding="utf-8"?>
<log xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
     xmlns="http://www.jasonkemp.ca/logentry.xsd"
     name="MyLog">
  <logEntry category="1">
    <data>This is an entry in my log.</data>
    <timestamp>2005-10-31T22:37:26.140625-08:00</timestamp>
  </logEntry>
  <logEntry category="2">
    <data>
       This is another entry taking advantage of
       the fact that I don’t need a timestamp
    </data>
    <timestamp xsi:nil="true" />
  </logEntry>
</log>

Open this bad boy in VS 2005 and watch the XML validator complain that the timestamp element is invalid, that it cannot be empty.


Total bummer.


Looks like my questions are answered in the negative. Nullable types are not supported by the XmlSerializer. However, since they were a late addition and a change was made regarding them late in the game, I’ll forgive them.
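For what it’s worth, there is a workaround if you’re stuck wanting both an optional attribute and a nullable property: keep the nullable for programmers and give the serializer a string-typed proxy. This isn’t what Xsd.exe generates; the proxy property name here is my own invention:

```csharp
// Inside the generated class; needs using System.Xml; and using System.Xml.Serialization;
private int? category;

// The friendly API: callers see one nullable property.
[XmlIgnore]
public int? Category
{
    get { return this.category; }
    set { this.category = value; }
}

// The proxy the serializer actually uses. A null string means
// "omit the attribute entirely", which is exactly what we want.
[XmlAttribute("category")]
public string CategoryValue
{
    get
    {
        return this.category.HasValue
            ? XmlConvert.ToString(this.category.Value)
            : null;
    }
    set
    {
        this.category = (value == null) ? (int?)null : XmlConvert.ToInt32(value);
    }
}
```

The serializer skips the attribute when the string is null, so the XML stays lean and callers only ever touch Category.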


Besides, they should have something to do for .NET 3.0. 😉