Be more efficient. Today: Windows Explorer

June 24, 2009 at 11:10 AMAndre Loker

Just a reminder for myself, or for anyone who's interested: here are some useful tips and tricks to use in and with Windows Explorer that probably not everybody knows about.

Keyboard shortcuts

  • Move back in history: ALT-LEFT
  • Move forward in history: ALT-RIGHT
  • Move one folder up: ALT-UP (seems to work on Vista only)
  • Expand current folder: NUMPAD + or RIGHT
  • Collapse current folder: NUMPAD - or LEFT
  • Recursively expand all subfolders of the current folder (yeah, don't try that on C:\): NUMPAD *
  • Recursively collapse all subfolders of the current folder (a bit tricky): first NUMPAD - or LEFT, then F5
  • Go to address bar: ALT-D (US systems), ALT-E (German systems – Windows Help says it's ALT-S, but it won't work on Vista, at least for me)
  • Toggle fullscreen: F11
  • Auto-completion in address bar: TAB (you can navigate fairly quickly by using ALT-D, TAB and BACKSLASH)


Other stuff

  • Fire up Explorer rooted at a specific path: explorer /root,directory where directory is the desired root directory, e.g. on my system explorer /root,e:\Tools opens an Explorer window with e:\Tools as its root folder.
  • By the way, this makes for a useful external tool in Visual Studio.
  • Open Explorer with a specific directory or file selected: explorer /select,Item – e.g. explorer /select,e:\Tools
  • Starting Explorer in a specific directory: explorer directory
  • Not directly related to Explorer, but a pretty cool tool nonetheless, is VisualSubst. It lets you mount any directory as a new drive, just like the good old subst.exe, but with a nice UI.



If you have more tips and tricks related to Windows Explorer, feel free to share them; I'll add them here.


Posted in: Tools


Using Nant for Metabase backup on 64bit Windows Server 2003

January 28, 2009 at 4:26 PMAndre Loker

For a while now I've been using NAnt not only as a build tool but also as the tool that runs all my backup tasks, such as:

  • database backups
  • Subversion repository backups
  • mail backup
  • website backup

I've also used it to create backups of the IIS Metabase using the iisback.vbs script, which works perfectly smoothly as long as it runs on 32-bit Windows.

I've been trying to back up the IIS Metabase on a 64-bit Windows Server 2003 server for a while now, but for some reason I could not make it work. If I called the script directly from the command line, e.g.

    iisback.vbs /backup /s localhost /e something /v NEXT_VERSION /b Metabase123 //E:vbscript

it ran perfectly fine. However, when executed as a NAnt task I got an error:

Could not create an instance of the CmdLib object.
Please register the Microsoft.CmdLib component.

After digging in the dark for a while I found an interesting forum thread from somebody with a similar problem. So I more or less did what was proposed in that thread:

  1. I copied cmdlib.wsc over from %WINDIR%\System32 to %WINDIR%\SysWOW64
  2. I also copied isschlp.wsc the same way
  3. I registered cmdlib.wsc and isschlp.wsc using regsvr32 cmdlib.wsc and regsvr32 isschlp.wsc respectively

At that point my NAnt script was happy again and created the backups.

The reason and more trouble

As far as I understand, the problem was that NAnt for some reason runs as a 32-bit application. On 64-bit Windows boxes, the System32 folder contains the 64-bit binaries, whereas the SysWOW64 folder contains the 32-bit versions. Frankly, this is not the most intuitive naming ever, but it has a reason: if a 32-bit application is running, the System32 folder becomes an alias for the SysWOW64 folder, with the effect that for 32-bit applications Windows looks totally normal (i.e. 32-bit). However, this also means that binaries that only exist in the unaliased System32 folder are not accessible to 32-bit applications. Hence the need to copy and re-register cmdlib.wsc and isschlp.wsc.

Now that I had this one working, I directly faced a second problem: the backups of the metabase are stored at %WINDIR%\System32\inetsrv\MetaBack. I think you can guess what the problem is: the folder is hidden from 32-bit applications, which means that I can't copy the Metabase backups using NAnt, because NAnt only sees the aliased version of System32.

So here's what I did to solve this issue: I couldn't find a way to disable the file system aliasing through standard means in NAnt, but I found a Win32 API that could help me: Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection.

The first of those two methods disables the aliasing for the current thread. So I wrote a small C# program that first disables the aliasing (by P/Invoking the methods mentioned above) and then copies the backed-up files out of the System32 folder into a conventional folder. I could then continue to use NAnt to further process those files (e.g. zipping them, mailing them somewhere – whatever).

Here’s the code of the class that disables/reverts the file system aliasing:

    /// <summary>
    /// Disables file system aliasing for 32 bit applications
    /// on 64 bit systems.
    /// </summary>
    public class DisableWow64Redirect : IDisposable {
      #region P/invoke
      [DllImport("Kernel32")]
      private static extern bool Wow64DisableWow64FsRedirection(out IntPtr oldValue);

      [DllImport("Kernel32")]
      private static extern bool Wow64RevertWow64FsRedirection(IntPtr oldValue);
      #endregion

      private readonly IntPtr oldValue;

      /// <summary>
      /// Creating a new object disables file system aliasing for the current thread.
      /// </summary>
      /// <remarks>
      /// Use <see cref="Dispose"/> to re-enable file system aliasing.</remarks>
      public DisableWow64Redirect() {
        Success = Wow64DisableWow64FsRedirection(out oldValue);
      }

      public bool Success { get; private set; }

      /// <summary>
      /// Disposes this object and re-enables the file system aliasing.
      /// </summary>
      public void Dispose() {
        if (Success) {
          Success = Wow64RevertWow64FsRedirection(oldValue);
        }
      }
    }

Granted, this is probably not the best solution one could think of, but it works for me for the moment. If anyone has a better idea, let me know!


Posted in: Windows | Snippets | C#


How to enable sound in Remote Desktop sessions on WinServer 2k3

August 26, 2008 at 1:46 PMAndre Loker

On a Windows Server 2003 machine I needed sound to be enabled during remote desktop connections. Here's what I had to do to bring the sound to the client machine.

  1. Install audio drivers on the server, of course
  2. Enable the Windows Audio service on the server.
    1. Open Administrative Tools => Services
    2. Locate the service named Windows Audio
    3. Set the start mode to "Automatic" and start the service
  3. Enable RDP-Tcp Audio Mapping
    1. Open Administrative Tools => Terminal Services Configuration
    2. Select the "Connections" node on the left side
    3. Right click on RDP-Tcp on the right side and select "Properties" (or double click RDP-Tcp)
    4. On the "Client Settings" tab, uncheck "Audio mapping" (checked items are disabled)
  4. In the Remote Desktop Connection window (client side), enable audio playback on client computer
    1. Show options by clicking "Options >>"
    2. On the "Local Resources" tab, select "Bring to this computer" in the combo box under "Remote computer sound"
  5. Connect to the remote server and you should have sound.

Note: you should only enable sound on the server if you have a good reason (e.g. I edit my DVB-T recordings on the server). Otherwise, leave audio off for stability and security reasons.

Posted in: Windows


Load a font from disk, stream or byte array

July 3, 2008 at 11:30 AMAndre Loker

A user asked in the forums how to load a Font from a Stream or a byte array. An interesting question, I think, because there might be applications that come with their own fonts. The obvious way to make the font available is to install it into the font collection of the O/S. In that case one could simply use the Font constructor that takes the font family name as its first argument and be happy.

However, there are several good reasons to decide against installing a new font to the O/S:

  • Installing a font requires administrative privileges
  • You don't want to clutter the font collection unnecessarily
  • Your app does not have or need an installer because it's a very simple tool. You neither want to add an installer just for installing the font, nor do you want the user to manually install a font.

Basics: load font from file

Therefore, the first thing we would like is to load a TrueType font file (i.e. a .ttf file) directly from disk, without the need to install it first. To achieve this, the .NET framework provides the class System.Drawing.Text.PrivateFontCollection. The method AddFontFile does just what we want: load a .ttf file. The Families property returns an array of all font families that have been loaded so far.

The method below loads a font file and returns the first font family:

    public static FontFamily LoadFontFamily(string fileName, out PrivateFontCollection fontCollection) {
      fontCollection = new PrivateFontCollection();
      fontCollection.AddFontFile(fileName);
      return fontCollection.Families[0];
    }

The returned FontFamily can then be used to construct specific fonts, like this:

    PrivateFontCollection fonts;
    FontFamily family = LoadFontFamily("TheFont.ttf", out fonts);
    Font theFont = new Font(family, 20.0f);
    // when done:
    theFont.Dispose();
    family.Dispose();
    fonts.Dispose();

You can then use the font as usual.

Next level: load font from stream or byte array

If you want to deploy your application as a single .exe you would prefer to load the font from an embedded resource rather than loading it from a separate file. PrivateFontCollection provides a method called AddMemoryFont that we can use for this purpose. Loading a file into memory is a snap, but AddMemoryFont accepts an IntPtr and a length. Therefore we need to fiddle a bit to get the IntPtr from the byte array. Here's a method that loads a font family from a byte array:

    // load font family from byte array
    public static FontFamily LoadFontFamily(byte[] buffer, out PrivateFontCollection fontCollection) {
      // pin array so we can get its address
      var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
      try {
        var ptr = Marshal.UnsafeAddrOfPinnedArrayElement(buffer, 0);
        fontCollection = new PrivateFontCollection();
        fontCollection.AddMemoryFont(ptr, buffer.Length);
        return fontCollection.Families[0];
      } finally {
        // don't forget to unpin the array!
        handle.Free();
      }
    }

As you can see, I'm using GCHandle and UnsafeAddrOfPinnedArrayElement to get the IntPtr to the first element in the array. If you prefer to use unsafe blocks, go ahead, it's even shorter:

    // load font family from byte array
    public static unsafe FontFamily LoadFontFamilyUnsafe(byte[] buffer, out PrivateFontCollection fontCollection) {
      fixed (byte* ptr = buffer) {
        fontCollection = new PrivateFontCollection();
        fontCollection.AddMemoryFont(new IntPtr(ptr), buffer.Length);
        return fontCollection.Families[0];
      }
    }

For convenience we provide another overload that accepts a stream:

    // Load font family from stream
    public static FontFamily LoadFontFamily(Stream stream, out PrivateFontCollection fontCollection) {
      var buffer = new byte[stream.Length];
      // Stream.Read may return fewer bytes than requested, so read in a loop
      int offset = 0;
      while (offset < buffer.Length) {
        int read = stream.Read(buffer, offset, buffer.Length - offset);
        if (read == 0) throw new EndOfStreamException();
        offset += read;
      }
      return LoadFontFamily(buffer, out fontCollection);
    }

With those methods available we can load a Font for example from an embedded resource:

    using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream("Test.TheFont.ttf")) {
      PrivateFontCollection fonts;
      FontFamily family = LoadFontFamily(s, out fonts);
      Font theFont = new Font(family, 20.0f);
      //...
    }

Note: the documentation of AddMemoryFont has this to say:

To use the memory font, text on a control must be rendered with GDI+. Use the SetCompatibleTextRenderingDefault method, passing true, to set GDI+ rendering on the application, or on individual controls by setting the control's UseCompatibleTextRendering property to true. Some controls cannot be rendered with GDI+.

While I haven't noticed problems when using memory fonts even with compatible text rendering deactivated (which is the preferable setting), you might experience problems. In this case you have two options:

  • Enable compatibility mode using SetCompatibleTextRenderingDefault, which is not really desirable as the new GDI text rendering engine is superior to the GDI+ engine.
  • Don't use AddMemoryFont, but extract the font to the temp directory and load it from there using AddFontFile

Update 07/07/2008: It seems that you must not dispose the PrivateFontCollection before you're done with the fonts within it; otherwise your app may crash. I updated the methods above to return the PrivateFontCollection instance. The caller has to dispose the collection when done using the fonts.

Posted in: Snippets



June 27, 2008 at 5:01 PMAndre Loker

HTTP methods

As you probably know, HTTP supports several methods that define the nature of the current request. The two most important ones are GET and POST. GET is the primary method to get content (so-called entities) from the server, such as HTML pages, images, CSS style sheets etc. The POST method, on the other hand, is meant to transport entities to the server, for example login credentials or a blog comment. On the server side a POST request often results in an update of certain data (databases, session state).

Both GET and POST can return an entity as a response. For GET this is obvious - it's what the method exists for in the first place. For POST it might sound reasonable as well, but it brings a pile of problems.

A simple scenario

Imagine you fill in a sign-up form of some web-based e-mail service and POST it to the server using a submit button. The server processes the new account and updates its database. Maybe it even logs you in directly. In response to the POST request the server directly shows you a view of your inbox. Here's what happens between browser and server:

  1. The browser POSTs form data to a URL called signup.aspx
  2. The server processes the request
  3. The server responds with a status code of 200 (OK) and sends back a view of the new user's inbox rendered as HTML

You leave the computer to have a coffee and when you come back 5 minutes later you refresh the page (using CTRL+R or F5 or whatever shortcut your browser uses) to see whether you already have new messages. You are a bit puzzled when the browser shows a message box asking whether you really want to resend the form data.

You click OK and are even more confused as the page that appears says "This user name is already taken" instead of showing your inbox.

What has happened? Remember that the page you saw was the response to a POST request (submitting the sign-up form). When you refreshed the page and confirmed to "resend the data" you actually repeated the POST request with the same form data. The server processed the "new" account and found that the user name is already in use (by yourself!), therefore it showed an error. "But wait", you say, "I just wanted the server to refresh the view of my inbox, what have I done wrong?" The answer is: nothing! The problem is that the application abused the POST response to transport an entity back to the client that should have been accessed with a GET request in the first place.

POST related issues

Here are some of the problems that occur if you abuse POST requests to return entities:

1. Refreshing the page results in a re-transmission of the POST data

This is what I described above. Hitting "refresh page" for a response based on a POST request will re-issue the POST request. Instead of refreshing what you see, this will repeat what you did to reach the current page. It is not "refresh page" anymore, it becomes "repeat last action" – which is most likely not what the user wants. If you see a summary page after you have submitted an order in an online store, you don't want F5 to drop another order, do you?

2. POST responses are hard to bookmark

Bookmarks (or favourites etc.) normally only remember the URL of the bookmarked page (along with some user-supplied metadata). Because a POST request transports data in the request body instead of as query parameters in the URL like GET does, bookmarking the result of a POST will not work in most cases.

3. POST responses pollute the browser history

If the browser keeps the result of a POST request in its history, going back to that history entry will normally cause the POST data to be retransmitted. This again causes the same issues as mentioned in point 1.


"But I need POSTs to send forms to the server - how can I avoid the problems mentioned above?" you might say. Here's where the POST-Redirect-GET (PRG hereafter) pattern enters the stage.

Instead of sending entity content with the POST response after we have processed the request, we return the URL of a new location that the browser should visit afterwards. Normally this new location shows the result of the POST or an updated view of some domain model.

This can be achieved by not returning a status code of 200 (success) but instead a status code that indicates a new location for the result, for example 303 ("See Other") or 302 ("Found"/"Moved Temporarily"), the latter of which is used most often nowadays. Together with the 30x status code, a Location header is sent which contains the URL of the page to which the request is redirected. Only the headers are sent; no body is included.
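For illustration, the entire body-less redirect response can be as small as this (a sketch only; real servers add further headers such as Date and Server, and the host name is made up):

```
HTTP/1.1 302 Found
Location: http://mail.example.com/inbox.aspx
Content-Length: 0
```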

If the browser sees the 30x status code, it will look for the Location header and issue a GET request to the URL mentioned there. Finally the user will see the body of that GET request in the browser.

The browser-server communication would look like this:


  1. The browser POSTs to signup.aspx
  2. The server updates some state etc.
  3. The response is a 302 redirect with a Location header value of inbox.aspx
  4. The browser realizes that the response is redirected and issues a GET to inbox.aspx
  5. The server returns 200 together with the content of the resource.
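The five steps above can be sketched with the Python standard library, independent of any particular web framework. This is a minimal stand-in for a real server; the /signup and /inbox URLs are made up for the demo.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PrgHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Step 2: consume and "process" the form data ...
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        # Step 3: ... then answer with a redirect instead of a page.
        self.send_response(302)
        self.send_header("Location", "/inbox")
        self.end_headers()

    def do_GET(self):
        # Step 5: the GET target renders the actual view; refreshing it
        # repeats only this harmless GET, never the POST.
        body = b"<html>your inbox</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), PrgHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# Step 1: the browser POSTs the form.
conn.request("POST", "/signup", body="user=joe",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
resp.read()
print(resp.status, resp.getheader("Location"))  # 302 /inbox

# Step 4: the browser follows the redirect with a GET.
conn.request("GET", resp.getheader("Location"))
resp2 = conn.getresponse()
resp2.read()
print(resp2.status)  # 200
server.shutdown()
```

Note that the POST response carries no entity at all; only the second, GETtable URL produces the page the user sees.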

What do we gain?

  • The page can be safely refreshed. Refreshing will cause another GET to inbox.aspx which won't cause any updates on the server
  • The result page can be easily bookmarked. Because the current resource is defined by the URL a bookmark to this URL is likely to be valid.
  • The browser history stays clean. Responses that have a redirect status code (such as 302) will not be put into the browser history by most browsers; only the location to which the response redirects is. Therefore signup.aspx won't be added to the history and we can safely go back and forth through the history without having to resubmit any POST data

The drawbacks of POST-Redirect-GET

While it should be clear by now that the POST-Redirect-GET pattern is the way to go in most situations, I'd like to point at the few drawbacks that come along with this pattern.

First of all, redirecting from one request to another causes an extra roundtrip to the server (one for the POST request, one for the GET request it redirects to). In this context the roundtrip should be understood as all processing and transmission time that is required and fixed per request, i.e. transmission delay, creation and invocation of the HTTP handler, opening and closing database connections/transactions, filling ORM caches etc.

If both requests can be handled very quickly by the server, this will essentially double the response time. If your roundtrip time is 200 ms, using PRG will cause a minimum delay of 400 ms between submitting the form and the result page being visible. This issue has to be put in perspective with reality, however. The server needs some time to process both requests, so the share of total time taken by the roundtrips decreases as server processing time grows. The response to the POST itself can be extremely small (a few hundred bytes), because only the headers need to be transmitted.

In practice I haven't noticed a real performance problem with PRG. A slow app will stay slow, a fast one won't truly suffer from the extra roundtrip. And besides, if you replace POSTs by GETs where appropriate the effect of PRG will be even less noticeable.

The problem with ASP.NET WebForms

Now that you know about POST-Redirect-GET you are of course eager to use it (at least I hope I could convince you). But as an ASP.NET WebForms developer you will soon run into problems: ASP.NET WebForms is fundamentally based on POSTs to the server. In essence, all ASP.NET web pages are wrapped in one huge <form> element with "method" set to "POST". Whenever you click a button, you essentially POST all form fields to the server. Of course you can redirect from a Button.Click handler. If you do so, you're applying PRG. At the same time you're working quite against the WebForms philosophy, especially the ViewState (which will get lost as soon as you redirect), which will force you to rethink a lot of your application logic. And if you don't rely on all this postback behaviour inherent to ASP.NET WebForms you might as well ask why you're using WebForms in the first place.

This makes clear why a lot of developers (including me) think that WebForms is inherently "broken" (viewstate, ubiquitous postbacks and the hard-to-mock HttpContext are just a few reasons). If you share these concerns but like .NET just as I do, you might want to look at alternative .NET-based web frameworks such as Castle MonoRail or ASP.NET MVC.


In situations where you use AJAX, the whole PRG issue becomes a different story. AJAX responses don't appear in the history, you wouldn't want to bookmark them, and refreshing a web page does not re-issue any AJAX requests (except those fired on page load). Therefore I have no problem with returning entities (HTML fragments, JSON, XML) from AJAX POSTs – PRG is not of much use here.


To conclude this article, here's a list of some basic rules that have been useful to me:

  1. Use POST-Redirect-GET whenever you can, that is: whenever you process a POST request on the server, send a redirect to a GETtable resource as response. It's applicable in almost all cases and will make your site much more usable for the visitor
  2. Don't POST what you can GET. If you only want to retrieve a parameterised resource it might be completely suitable to use a GET request with query string parameters. Google is a good example. The start page contains a simple form with a single text field to enter the search terms. Submitting the form causes a GET to /search with the search terms passed as the query string parameter q. This can be easily done by providing method="GET" on the <form> element (or just leave out the method attribute, as GET is the default).
  3. POST requests from AJAX are allowed to return entities directly, as they don't suffer from the same problems as "full" POSTs do.
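Rule 2 is easy to see in action: with a GET form, the fields simply become the URL's query string, which makes the result bookmarkable and refresh-safe. Here's a small Python sketch (the /search URL and the q parameter mirror the Google example above):

```python
from urllib.parse import parse_qs, urlencode, urlparse

# A GET form submission is just the form fields encoded into the URL's
# query string; no request body is involved.
fields = {"q": "post redirect get"}
url = "/search?" + urlencode(fields)
print(url)  # /search?q=post+redirect+get

# The server reads the same parameters straight back out of the URL:
params = parse_qs(urlparse(url).query)
print(params["q"][0])  # post redirect get
```

Because the whole request is captured by the URL, repeating it (refresh, bookmark, history) is harmless by design.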

Posted in: ASP.NET | Patterns
