NDepend: code metrics at your service

July 8, 2008 at 10:07 AM · Andre Loker

If you have ever written code for a non-trivial project, chances are that from time to time you stop and think: "I don't know, but I have the feeling that the code is not really clean/too complex/[insert adjective here that makes you feel bad about your code]". Chances are even that you did not have these thoughts - but your source code indeed was not really clean, too complex or what not. While the latter situation is certainly the worse of the two, both situations make clear that we need means to quantify the quality of our code. And how do we quantify things? By attaching numbers to them, of course. While a statement such as "80% of my code is crap, I think" is certainly a quantification (though not one that is applicable in practice, I hope), we are looking for a tool that can do the math for us and tell us everything we want to know about our code...

... and here comes NDepend!

NDepend is an incredibly versatile tool that can help us improve our code base. The tool analyses project assemblies and source code against a multitude of metrics. NDepend can create static reports containing the results in tabular and graphical form, but it also provides an interactive tool (Visual NDepend) which allows us to drill down into assemblies, namespaces and types in virtually every possible way.

First of all, let's look at why it is so useful to have a tool like NDepend at hand:

Improve Communication

Communication is extremely important if you are developing software in a team. One reason why there are catalogues of design patterns is the fact that they introduce a vocabulary that developers get used to. If I talk about abstract factories, commands and strategies, my colleagues know what I mean.

Using NDepend extends the developers' vocabulary and enriches the way in which developers can communicate. This can be a dialogue between two developers: A: "Hey, this type has high efferent coupling, we need to have a look at it" - B: "You're right, it also has a high lack of cohesion value" - A: "Looks like we should concentrate our next refactoring session on this type..." - B: "Absolutely!" - A: "... but first have a cup of coffee :-)" [note: the last statement is independent of any third party tools]

Track progression and evolution

NDepend is capable of comparing two builds of the same project. This allows us to quantify how the quality of a project evolves. For example, code refactoring should generally lead to code that is less complex (for example in terms of "number of lines per method" or cyclomatic complexity). By comparing a build before refactoring with one after refactoring you can track how effective your refactoring session was.

Verify development guidelines

NDepend can help us enforce guidelines that have been agreed upon. For example, you might define that methods should not have more than X lines (or Y IL instructions), or that methods with more than 5 lines of code should have at least 20% comments. Checking those guidelines is easily done with NDepend.

Improve code quality

This is, of course, the ultimate goal of all of us - at least I hope :-) Having the numbers (i.e. metrics) is one thing; drawing consequences from those numbers is another. The numbers (and graphs) NDepend gives us can help us spot places in the code that can be improved - places which we might have overlooked otherwise. This gives us very concrete chances to improve our source code.

Some basic metrics

Before we go into detail on NDepend, here are some of the more advanced metrics that we will deal with:

  • Afferent coupling (Ca)
    This metric describes the number of types or methods from outside of the current assembly that use a given type or method. The higher this value, the more important the given type or method is to users of the assembly.
  • Efferent coupling (Ce)
    This is the counterpart of Ca: it describes the number of assembly external types/methods that a specific type/method uses. A high value indicates that the specific type/method is very dependent on the external assembly.
  • Relational cohesion (H)
    A metric that describes how strongly the types within a single assembly are related to each other. Generally, types within an assembly should be strongly related, but not too strongly.
  • Instability (I)
    This describes how sensitive an assembly is to changes in assemblies it depends on. It is measured as the quotient of efferent coupling (Ce) and total coupling (Ca+Ce).
  • Abstractness (A)
    Describes the ratio of abstract types in an assembly.
  • Distance from main sequence (D)
    Instability and abstractness should be in a certain balance. In my own words, I would describe this balance as follows: an assembly with high abstractness should be stable, as it is most likely used as an input assembly for other assemblies. If it were unstable, it would be likely to change sooner or later, and this change would ripple through all assemblies that depend on it. On the other hand, a very concrete assembly (low abstractness) is likely to be at the end of a dependency graph, that is, almost no assemblies depend on it. It can and will therefore be quite unstable.
  • Lack of cohesion (LCOM)
    In a coherent class, most of the methods will deal with most of the fields of that class. If you find that many methods in the class deal only with a subset of the fields it might be an indicator that the responsibility of the class is too broad and the class should be split.
  • Cyclomatic complexity (CC)
    This metric describes how many paths a method has. The control flow in a method branches at every conditional statement, loop and similar statement. A method with a high CC is hard to maintain.
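To make the relationship between instability, abstractness and the distance from the main sequence concrete, here is a small worked example. The D formula is the standard "distance from the main sequence" definition; the concrete numbers are made up for illustration:

```latex
I = \frac{C_e}{C_a + C_e}, \qquad
A = \frac{\text{abstract types}}{\text{total types}}, \qquad
D = \lvert A + I - 1 \rvert

% Example: an assembly with C_a = 6, C_e = 2 and 3 abstract types out of 10:
% I = 2 / (6 + 2) = 0.25
% A = 3 / 10 = 0.3
% D = |0.3 + 0.25 - 1| = 0.45   (fairly far from the main sequence)
```

An assembly with D close to 0 is either abstract and stable or concrete and unstable - both healthy situations according to the reasoning above.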

Visual NDepend

Now that we are convinced that metrics are a Good Thing™, let us have a look at what NDepend brings along.

The NDepend package comes with two programs: the console runner (NDepend.Console.exe) and the graphical user interface (Visual NDepend). The former will be mostly used in automatic builds. To get in touch with NDepend let's stick to the GUI.

User interface styles

Visual NDepend supports two styles:

  1. the "Menu & Toolbar" style - a look and feel comparable to MS Office 2003
  2. the "ribbon" style - this style uses the tabs & ribbons look and feel that you know from MS Office 2007

Here's what you get after you fire up VisualNDepend.exe. To the left, the "Menu & Toolbar" style, to the right the "ribbon" style:


As you see, both versions look very pleasing. The UI of Visual NDepend is extremely polished, certainly among the most polished UIs of any of the tools I use. Personally I prefer the ribbon style - it's well arranged and I can find everything quickly.

You can change between the two styles in the options:


Hint: to reduce the amount of space the ribbons take, double click on the tab header. The ribbons will then disappear:


A single click on a tab header will show the ribbon temporarily, a double click restores the view back to normal. This is useful if you need as much space as possible, e.g. when you're analysing a solution.

Creating a project - a simple example

Visual NDepend supports two operation modes which only differ in whether you explicitly create a project file or not: if you just want to do a quick analysis, simply select the menu item "Select .NET assemblies". This allows you to perform the analysis without creating an explicit project file. The other option is to create an explicit project. This is of course recommended if you need to perform the analysis more than once (e.g. in continuous builds).

Let's just create a new project. You only need to name the project and enter a location for the project file (an xml file). I really appreciate the simplicity here. I don't like it if a program requires you to make a lot of decisions when creating a project.


After the new project is created you need to add assemblies that NDepend can analyse. To the left you have a list of "Application assemblies". Those assemblies are the ones that are compiled from the source code in the project. To the right there's a list of "Tier assemblies". These are the assemblies that your application assemblies reference, for example mscorlib, the System.* assemblies or other third party libraries. The separation between application assemblies and tier assemblies is extremely useful. Most likely you'll only want to analyse your own assemblies and their dependencies to the tier assemblies - there's no need to analyse the cohesion of classes in System.Core.dll.

To add application assemblies either drag and drop them from the Explorer to the application assemblies list or use "Add Assemblies of a Visual Studio solution" to use a .sln file to look up the project assemblies. The "View folders" button allows you to inspect and add folders from which application and tier assemblies should be loaded. After adding some application assemblies, my screen looks like this:


You can use the tabs on the left side to edit additional properties of your project, for example if and how to compare your project to an earlier build, where to put the report files and what to show in your report.


After we have set up the project, we're ready to go: it's time to run the analysis! NDepend will start analysing your assemblies and generating the report files. The generation process does not take too long - about 20 seconds for a medium sized project on a decent machine.

Result windows

This is what you will see after running the analysis:


Class browser

On the left side you have a class browser which shows the assemblies, namespaces, types and members of your project. Application assemblies are black, tier assemblies are blue. If you hover over a type or member it is selected in the metrics window and the Info window displays the metrics of the selected element (see the description of the metrics window).


Metrics window

The metrics window visualizes the relative and absolute size of assemblies, namespaces, types and methods in terms of lines of code, number of IL instructions etc. This allows us to easily pinpoint the most important types etc. at a glance. If you hover with the mouse over one of the squares, it is highlighted, the metric value is shown and the Info window in the bottom left displays all metrics for the selected square:


There is a little issue with the Metrics window, though. While hovering over or clicking a square updates the Info window, the selection is not pinned. This means that as soon as the mouse leaves the selected square, the Info window will either be empty (if the mouse is not over a square) or reflect the element that is currently under the mouse. It would be better if a single click on an element in the Metrics window fixed the selection. Clicking on an element in the Class Browser, by the way, does fix the selection.

Double clicking a member will launch Visual Studio and open the appropriate file. Cool!


Info window

The Info window shows metrics for the element selected in the Metrics window or in the Class Browser: number of IL instructions, number of lines, number of lines with comments, percentage of comments, cyclomatic complexity etc.


Dependencies window

This window displays the dependencies between the assemblies and types in our project. Starting at the assembly level, this tool allows us to drill down to deeper levels (namespaces, types, members) to detect dependencies at those levels. Again, NDepend displays application assemblies and tier assemblies differently.

Application assemblies are shown in a triangulated matrix: all app assemblies can potentially be used by other assemblies and use other assemblies at the same time. For example: in this test project, 6 methods in the assembly Vanilla.Web.Monorail together use 17 members of the assembly Vanilla.Web.



For tier assemblies only one direction is displayed, that is how they are used by app assemblies. For example, we can see that Vanilla.Web.MonoRail uses 68 types of the Castle.MonoRail.Framework assembly:


A single click on any of the squares is meant to show you a dependency graph. Originally this did not work on 64 bit platforms, due to an incompatibility between the graph rendering library used and 64 bit systems. Patrick Smacchia promised that this problem would be taken care of in one of the next versions - and indeed, in the current version of NDepend this issue is not present anymore; dependency graphs work like a charm on 64 bit platforms. Read more.

I will show an example dependency graph when I come to automatic builds.

By clicking on one of the "+" buttons on the left side or the top side of the matrix you can dig down to lower levels: namespaces, types, members. This gives you endless possibilities of determining dependencies.


If you want to focus on a specific dependency, double click the corresponding square and Visual NDepend will "zoom in" into this dependency:


Let's leave the Dependencies window alone for now - its possibilities are countless, just play with it!

CQL Queries

Let me put it straight: this feature is just awesome! NDepend spits out a lot of metrics on its own, but it also gives you a powerful query language that you can use to gather almost any information about your source code that you like.

CQL (Code Query Language) is a query language similar to SQL - which is the first cool thing as most of us are used to SQL. Using CQL you can query against a large set of metrics. Have a look at the CQL specifications to see how complex the query language is.

To give you an example of a simple CQL query:

    SELECT TYPES WHERE NbFields > 6

This query returns all types with more than 6 fields. Easy, hm? Another example: methods that are potentially unused:

    SELECT METHODS WHERE
      MethodCa == 0 AND            // Ca=0 -> no afferent coupling -> the method is not used in the context of this application.
      !IsPublic AND                // Public methods might be used by client applications of your assemblies.
      !IsEntryPoint AND            // The Main() method is not used by design.
      !IsExplicitInterfaceImpl AND // The IL code never explicitly calls explicit interface method implementations.
      !IsClassConstructor AND      // The IL code never explicitly calls class constructors.
      !IsFinalizer                 // The IL code never explicitly calls finalizers.

Taking the queries a step further, you can define constraints using CQL which can be used to express design guidelines or rules. For example if your design rule is to not have methods with more than 20 lines of code, you can express this constraint like this:

    WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 20

When you analyse your project NDepend will generate a warning for all methods that have more than 20 lines of code (you might want to refactor those methods).
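The comment-coverage guideline mentioned at the beginning of this article can be expressed the same way. A sketch (I believe PercentageComment is the metric NDepend provides for this; treat the exact identifier as an assumption and check the CQL specification):

```sql
WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 5 AND PercentageComment < 20
```

Any method longer than five lines with less than 20% comments would then show up as a warning in the analysis report.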

In the CQL window you can group the queries. NDepend comes with a standard set of useful queries so you don't have to write everything from scratch.


The CQL query editor is - like the rest of the application - well polished. It provides syntax highlighting and code completion:


By the way, CQL is constantly being extended. New metrics are added in almost every new version of NDepend.

Here's a screen shot of the query result window showing types with more than 20 methods:


While CQL is already very powerful, I've been missing some features:

  • aggregation - for example I'd like to calculate the max, min and average number of lines of code per method (update: while aggregates are not queryable, the query result window shows some aggregated values, see the screen shot above)
  • comparing metrics - for example I cannot select methods with too many IL instructions per line (like "SELECT METHODS WHERE NbILInstructions  > (NbLinesOfCode * 10)"); CQL won't allow me to compare NbILInstructions with anything other than integer numbers.

But all in all, CQL is a great idea and a powerful language. It is what makes NDepend such a versatile tool.

With this I'll conclude this short overview of Visual NDepend. The program contains heaps of other features which you should discover for yourself.

The HTML report

Where Visual NDepend is used to set up a project and analyse it interactively, the HTML report is meant to represent the state of a project in a static and concise way. The analysis data that NDepend generates is stored in an XML file. This has the advantage that you can simply use XSLT to transform the result into HTML - which is exactly what NDepend does to generate the HTML report. In Visual NDepend you can choose to either provide your own xsl file or use the default transformation that NDepend comes with. The latter is certainly sufficient in most cases. If you need more control, go ahead and build your own xsl transformation file. This fits perfectly into NDepend's philosophy: provide a useful default set of functionality, but be open for extensions!

So, what does the default report show?

  • General application metrics
    • lines of code
    • number of IL instructions
    • number of lines with comment, percentage comments
    • number of assemblies, types, classes, interfaces, structs,  etc.
    • Percentage of public types and methods etc.
    • Average number of fields per type, method per type etc.
    • ...
  • Metrics per assembly
    • LOC, number of IL instructions, ...
    • coupling metrics (Ca, Ce, relational cohesion, instability, abstractness, instability-abstractness-balance)
  • Assembly dependencies
  • CQL query & constraints results
    • Warnings for constraints that have failed
  • Type metrics
    • LOC, number of IL instructions, ...
    • coupling metrics (Ca, Ce, lack of cohesion ...)
    • cyclomatic complexity
    • Number of directly and indirectly derived classes, depth in inheritance tree
  • Type dependencies (initially not enabled)
    • Defines which types depend on which types.

The report also contains a dependency view (as in the Metrics window in Visual NDepend), a dependency graph (again, see the 64 bit note above) and graphs that show the balance between abstractness and stability.


(In the example project, the assemblies seem to be quite unstable)

NDepend in automatic builds

You will most likely want to have NDepend generate a report during automatic builds; it's an invaluable tool to define metrics for code quality and to enforce design guidelines. NDepend comes with a command line tool (NDepend.Console.exe) that can be integrated into the build process. The command line tool is kept simple: it simply uses a project file that you generated with Visual NDepend beforehand. While this makes it easy to configure an NDepend project in a central place, it has some drawbacks. NDepend stores only absolute paths, for example to folders that contain tested assemblies or to a previous build you want to compare the current build to. Update: the previous sentence is not true, NDepend supports a relative path mode. I simply overlooked the option the whole time. It can be found under Project properties => Code to analyze => Relative path mode:


While you can override the input folders and output folders with command line flags (/InDirs and /OutDir), other options in the project file cannot be overridden. This could cause trouble if you have a dedicated build server.

NDepend ships with an xslt for CruiseControl.NET and build tasks for nant and MsBuild. I haven't used any of the build tasks. I simply used the <exec> task in nant. Here's an example from one of my build files:

    <property name="ndepend.project" value='"${root.dir}\NDependProject.xml"'/>
    <property name="ndepend.outdir" value='${reports.dir}\ndepend'/>
    <property name="ndepend.indirs" value='"${build.dir}"'/>
    <property name="ndepend.indirs" value='${ndepend.indirs} "C:\Windows\Microsoft.NET\Framework\v2.0.50727"'/>
    <property name="ndepend.indirs" value='${ndepend.indirs} "C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0"'/>
    <property name="ndepend.indirs" value='${ndepend.indirs} "C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5"'/>
    <exec program="NDepend.Console.exe"
        commandline='${ndepend.project} /InDirs ${ndepend.indirs} /OutDir "${ndepend.outdir}"'/>

Not too hard, if you ask me. The xsl file provided for CruiseControl creates a report that is similar to the HTML report you get when analysing with Visual NDepend, so I won't go into detail here. However, I promised to show an example of a dependency graph, so here we go:


Based on a blog post by Robin Curry, I further improved the ccnet integration; it now looks like this:


I needed to update Robin's XSL file to match the current version of NDepend. I plan to write a separate article on this "advanced" ccnet integration, so stay tuned!

Documentation, help, support

One thing I absolutely have to mention positively is the amount of help you get with NDepend. NDepend offers a plethora of tutorials (in video and text form), definitions of all metrics, an in-depth specification of CQL, and a massive amount of tips and tricks. It is not often that you get that much support!


Pricing

NDepend licenses are available starting from 299€ (excl. VAT), with a massive discount depending on the number of licenses (down to 179€/license if you order more than 20 licenses). Furthermore, enterprise licenses are available on demand. See the purchase page for details.


Wow, that was a long article, wasn't it? Still, I could only show you a fraction of the functionality NDepend has to offer. The cool thing is that you can do whatever fits your needs thanks to the extremely flexible and extensible design using CQL. Visual NDepend is a great user interface which makes analysing a project interactively easy, fun and interesting. Integrate NDepend into your build process and you have heaps of metrics that you can use to quantify the quality of your code. The price is absolutely adequate.


Pros

  • Extremely versatile and extensible, thanks to CQL
  • Pinpoints problematic areas in your code
  • Quantifies code quality - get rid of "I have the feeling that this and that piece of code is not optimal"
  • Introduces a whole new language to the communication between developers
  • Visual NDepend is a great GUI
  • Large amount of tutorials
  • Useful set of metrics to start with, extensible if needed
  • Very convincing value for the money!

Cons and issues

  • Dependency graphs not supported on x64 machines as of now
  • CQL lacks some possibly interesting features (aggregates, comparison of metrics)
  • NDepend.Console.exe has a limited set of parameters. It would be nice to be able to provide more options instead of relying on project files
  • Project files store mostly absolute paths. Update: not true, NDepend supports a relative path mode.
  • An HTML report is always created, even in CI scenarios, where the XML files would have been enough.

Granted, none of the issues stated above are show stoppers. All in all there's no doubt that NDepend is an excellent tool. I can wholeheartedly recommend it to any developer who wants to improve the quality of his/her code.

Update 07/08/2008:

Patrick has just published a post in his blog in which he compares NDepend to other tools. I especially like the comparison to tools like Resharper or CodeRush:

I like to think that what tools such as ReSharper or CodeRush are doing to your code at micro level (i.e methods' body structuring), NDepend does it at macro level (i.e class, namespace, assembly structuring). Hence, as a developer I personally use both kind of tools to automatically control every aspects of the code base I am working on.

@Patrick: thanks for mentioning this post!

Things I updated:

  • Rectified the statement regarding absolute paths. NDepend does support a relative path mode
  • Added screen shot of query result window and mentioned that aggregates are shown in that window
  • Added link to Patrick's blog

Posted in: Tools


Load a font from disk, stream or byte array

July 3, 2008 at 11:30 AM · Andre Loker

A user asked in the Gamedev.net forums how to load a Font from a Stream or a byte array. An interesting question, I think, because there might be applications that come with their own fonts. The obvious way to make the font available is by installing it into the font collection of the OS. In that case one could simply use the Font constructor that takes the font family name as its first argument and be happy.

However, there are several good reasons to decide against installing a new font into the OS:

  • Installing a font requires administrative privileges
  • You don't want to clutter the font collection unnecessarily
  • Your app does not have or need an installer because it's a very simple tool. You neither want to add an installer just for installing the font, nor do you want the user to manually install a font.

Basics: load font from file

Therefore, the first thing we would like to do is load a TrueType font file (i.e. a .ttf file) directly from disk, without the need to install it first. To achieve this, the .NET framework provides the class System.Drawing.Text.PrivateFontCollection. The method AddFontFile does just what we want: load a ttf file. The Families property returns an array of all font families that have been loaded so far.

The method below loads a font file and returns the first font family:

    public static FontFamily LoadFontFamily(string fileName, out PrivateFontCollection fontCollection) {
      fontCollection = new PrivateFontCollection();
      fontCollection.AddFontFile(fileName);
      return fontCollection.Families[0];
    }

The returned FontFamily can then be used to construct specific fonts, like this:

    PrivateFontCollection fonts;
    FontFamily family = LoadFontFamily("TheFont.ttf", out fonts);
    Font theFont = new Font(family, 20.0f);
    // when done:
    theFont.Dispose();
    family.Dispose();
    fonts.Dispose();

You can then use the font as usual.

Next level: load font from stream or byte array

If you want to deploy your application as a single .exe, you will prefer to load the font from an embedded resource rather than from a separate file. PrivateFontCollection provides a method called AddMemoryFont that we can use for this purpose. Loading a file into memory is a snap, but AddMemoryFont accepts an IntPtr and a length. Therefore we need to fiddle a bit to get an IntPtr from the byte array. Here's a method that loads a font family from a byte array:

    // load font family from byte array
    public static FontFamily LoadFontFamily(byte[] buffer, out PrivateFontCollection fontCollection) {
      // pin array so we can get its address
      var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
      try {
        var ptr = Marshal.UnsafeAddrOfPinnedArrayElement(buffer, 0);
        fontCollection = new PrivateFontCollection();
        fontCollection.AddMemoryFont(ptr, buffer.Length);
        return fontCollection.Families[0];
      } finally {
        // don't forget to unpin the array!
        handle.Free();
      }
    }

As you can see, I'm using GCHandle and UnsafeAddrOfPinnedArrayElement to get the IntPtr to the first element in the array. If you prefer to use unsafe blocks, go ahead, it's even shorter:

    // load font family from byte array
    public static unsafe FontFamily LoadFontFamilyUnsafe(byte[] buffer, out PrivateFontCollection fontCollection) {
      fixed (byte* ptr = buffer) {
        fontCollection = new PrivateFontCollection();
        fontCollection.AddMemoryFont(new IntPtr(ptr), buffer.Length);
        return fontCollection.Families[0];
      }
    }

For convenience we provide another overload that accepts a stream:

    // Load font family from stream
    public static FontFamily LoadFontFamily(Stream stream, out PrivateFontCollection fontCollection) {
      var buffer = new byte[stream.Length];
      // Stream.Read is not guaranteed to fill the buffer in a single call, so read in a loop
      int read, offset = 0;
      while (offset < buffer.Length && (read = stream.Read(buffer, offset, buffer.Length - offset)) > 0) {
        offset += read;
      }
      return LoadFontFamily(buffer, out fontCollection);
    }

With those methods available we can load a Font for example from an embedded resource:

    using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream("Test.TheFont.ttf")) {
      PrivateFontCollection fonts;
      FontFamily family = LoadFontFamily(s, out fonts);
      Font theFont = new Font(family, 20.0f);
      //...
    }

Note: the documentation of AddMemoryFont has this to say:

To use the memory font, text on a control must be rendered with GDI+. Use the SetCompatibleTextRenderingDefault method, passing true, to set GDI+ rendering on the application, or on individual controls by setting the control's UseCompatibleTextRendering property to true. Some controls cannot be rendered with GDI+.

While I haven't noticed problems when using memory fonts even when compatible text rendering is deactivated (which is preferable), you might experience problems. In this case you have two options:

  • Enable compatibility mode using SetCompatibleTextRenderingDefault, which is not really desirable as the new GDI text rendering engine is superior to the GDI+ engine.
  • Don't use AddMemoryFont, but extract the font to the temp directory and load it from there using AddFontFile
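The second option - extracting the font to the temp directory first - could look roughly like this. This is a sketch under the assumption that the font ships as an embedded resource; the resource name "Test.TheFont.ttf" is just the example name used above, and the helper name is my own:

```csharp
using System.Drawing;
using System.Drawing.Text;
using System.IO;
using System.Reflection;

static class FontFallback {
  // Extracts an embedded font resource to the temp directory and loads it
  // via AddFontFile, avoiding the GDI+ restriction of AddMemoryFont.
  public static FontFamily LoadFontViaTempFile(string resourceName, out PrivateFontCollection fontCollection) {
    string tempPath = Path.Combine(Path.GetTempPath(), resourceName);
    using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName))
    using (FileStream file = File.Create(tempPath)) {
      // copy the resource to the temp file (Stream.CopyTo is not available before .NET 4)
      var buffer = new byte[4096];
      int read;
      while ((read = s.Read(buffer, 0, buffer.Length)) > 0) {
        file.Write(buffer, 0, read);
      }
    }
    fontCollection = new PrivateFontCollection();
    fontCollection.AddFontFile(tempPath);
    return fontCollection.Families[0];
  }
}
```

As with the in-memory variants, the caller should dispose the returned family and the collection when done, and may want to delete the temp file on exit.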

Update 07/07/2008: It seems that you must not dispose the PrivateFontCollection before you're done with the fonts within it; otherwise your app may crash. I updated the methods above to return the PrivateFontCollection instance. The caller has to dispose the collection after he/she is done using the fonts.

Posted in: Snippets


Subversion 1.5 released

June 28, 2008 at 9:49 AM · Andre Loker

OK, it has already been some days since the release; nevertheless, you might not have noticed: Subversion 1.5 has been released, bringing some improvements and changes. As a consequence, I spent 30 minutes today upgrading the tools related to SVN:

Be aware that repositories and working directories that have been upgraded to the new 1.5 format are not compatible with Subversion 1.4.x and below. Furthermore, the working directory upgrade occurs automatically(!) if you use an svn 1.5 client on a working dir. Repository upgrades don't happen automatically, though, and must be triggered with svnadmin upgrade or simply using VisualSVN Server (right mouse click on the Repositories node, "All tasks => Upgrade Repositories format...").

Posted in: Tools



June 27, 2008 at 5:01 PM · Andre Loker

HTTP methods

As you probably know, HTTP supports several methods that define the nature of the current request. The two most important ones are GET and POST. GET is the primary method to get content (so-called entities) from the server, such as HTML pages, images, CSS style sheets etc. The POST method, on the other hand, is meant to transport entities to the server, for example login credentials or a blog comment. On the server side a POST request often results in an update of certain data (databases, session state).

Both GET and POST can return an entity as a response. For GET this is obvious - it's what the method exists for in the first place. For POST it might sound reasonable in the first place as well, but it brings a pile of problems.

A simple scenario

Imagine you fill in a sign-up form of some web based e-mail service and POST it to the server using a submit button. The server processes the new account and updates its database. Maybe it even logs you in directly. In response of the POST request the server directly shows you a view of your inbox. Here's a diagram of what happens between browser and server:


  1. The browser POSTs form data to a URL called signup.aspx
  2. The server processes the request
  3. The server responds with a status code of 200 (OK) and sends back a view of the new user's inbox rendered as HTML

You leave the computer to have a coffee and when you come back 5 minutes later you refresh the page (using CTRL+R or F5 or whatever shortcut your browser uses) to see whether you already have new messages. You are a bit puzzled why this (or a similar) message box appears:


You click on OK and are even more confused when the page that appears says "This user name is already taken" instead of showing your inbox.

What has happened? Remember that the page you saw was the response to a POST request (submitting the sign-up form). When you refreshed the page and confirmed to "resend the data", you actually repeated the POST request with the same form data. The server processed the "new" account, found that the user name is already in use (by yourself!) and therefore showed an error. "But wait", you say, "I just wanted the server to refresh the view of my inbox, what have I done wrong?" The answer is: nothing! The problem is that the application abused the POST response to transport an entity back to the client that should have been accessed with a GET request in the first place.
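The repeated side effect can be sketched with a toy, framework-agnostic signup handler (all names here are hypothetical): re-sending the same POST body repeats the action instead of refreshing the view.

```python
# Toy model of the signup endpoint. "Refreshing" the POST response
# re-sends the same form data, which repeats the side effect.
accounts = set()

def handle_signup_post(form):
    """Process a POSTed sign-up form; returns the response body."""
    username = form["username"]
    if username in accounts:
        return "This user name is already taken"
    accounts.add(username)
    return f"Welcome, {username}! Here is your inbox."

form = {"username": "andre"}
first = handle_signup_post(form)   # original submit: account created
second = handle_signup_post(form)  # "refresh" re-sends the POST
print(second)  # prints the "already taken" error, not the inbox
```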

POST related issues

Here are some of the problems that occur if you abuse POST requests to return entities:

1. Refreshing the page results in a re-transmission of the POST data

This is what I described above. Hitting "refresh page" on a response to a POST request will re-issue the POST. Instead of refreshing what you see, this repeats what you did to reach the current page. "Refresh page" effectively becomes "repeat last action" - which is most likely not what the user wants. If you see a summary page after you have submitted an order in an online store, you don't want F5 to drop another order, do you?

2. POST responses are hard to bookmark

Bookmarks (or favourites etc.) normally only remember the URL of the bookmarked page (along with some user-supplied meta data). Because a POST request transports its data in the request body instead of as query parameters in the URL like GET does, bookmarking the result of a POST will not work in most cases.

3. POST responses pollute the browser history

If the browser keeps the result of a POST request in its history, going back to that history entry will normally cause the POST data to be retransmitted. This raises the same issues as point 1.


"But I need POSTs to send forms to the server - how can I avoid the problems mentioned above?" you might say. Here's where the POST-Redirect-GET (PRG hereafter) pattern enters the stage.

Instead of sending entity content with the POST response after processing the request, we return the URL of a new location which the browser should visit afterwards. Normally this new location shows the result of the POST or an updated view of some domain model.

This can be achieved by returning not a status code of 200 (success) but one that indicates a new location for the result, for example 303 ("See Other") or 302 ("Found", formerly "Moved Temporarily"), the latter of which is used most often nowadays. Together with the 30x status code a Location header is sent which contains the URL of the page to which the request is redirected. Only headers are sent, no body is included.

If the browser sees the 30x status code, it will look for the Location header and issue a GET request to the URL mentioned there. Finally, the user will see the body of the response to that GET request in the browser.

The browser-server communication would look like this:


  1. The browser POSTs to signup.aspx
  2. The server updates some state etc.
  3. The response is a 302 redirect with a Location header value of inbox.aspx
  4. The browser realizes that the response is redirected and issues a GET to inbox.aspx
  5. The server returns 200 together with the content of the resource.
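The five steps above can be sketched with Python's standard library; this is a minimal illustration, with hypothetical /signup and /inbox paths standing in for signup.aspx and inbox.aspx:

```python
# Minimal PRG demo: POST /signup answers with 303 + Location,
# GET /inbox answers with the actual page.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen, Request

class PRGHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/signup":
            # Consume the form body, update server-side state here...
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            # ...then redirect instead of rendering a page:
            self.send_response(303)               # "See Other"
            self.send_header("Location", "/inbox")
            self.end_headers()                    # headers only, no body

    def do_GET(self):
        if self.path == "/inbox":
            body = b"<html>Your inbox is empty.</html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PRGHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Like a browser, urlopen follows the 303 with a GET automatically:
resp = urlopen(Request(f"http://127.0.0.1:{port}/signup",
                       data=b"username=andre", method="POST"))
print(resp.status, resp.url)  # 200 and a URL ending in /inbox
server.shutdown()
```

Note that the client ends up on /inbox with a plain GET, so refreshing, bookmarking and the history all behave as described below.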

What do we gain?

  • The page can be safely refreshed. Refreshing causes another GET to inbox.aspx, which won't trigger any updates on the server.
  • The result page can easily be bookmarked. Because the current resource is fully identified by its URL, a bookmark to this URL is likely to remain valid.
  • The browser history stays clean. Most browsers do not put responses with a redirect status code (such as 302) into the history - only the location to which the response redirects. Therefore signup.aspx won't be added to the history, and we can safely go back and forth without having to resubmit any POST data.

The drawbacks of POST-Redirect-GET

While it should be clear by now that the POST-Redirect-GET pattern is the way to go in most situations, I'd like to point at the few drawbacks that come along with this pattern.

First of all, redirecting from one request to another causes an extra roundtrip to the server (one for the POST request, one for the GET request it redirects to). In this context the roundtrip should be understood as all processing and transmission time that is required and fixed per request, i.e. transmission delay, creation and invocation of the HTTP handler, opening and closing database connections/transactions, filling ORM caches etc.

If both requests can be handled very quickly by the server, this essentially doubles the response time. If your roundtrip time is 200ms, using PRG will cause a minimum delay of 400ms between submitting the form and seeing the result page. This issue has to be put in perspective, however. The server needs some time to process both requests, so the relative cost of the extra roundtrip decreases as server processing time grows. The response to the POST itself can also be extremely small (a few hundred bytes), because only headers need to be transmitted.

In practice I haven't noticed a real performance problem with PRG. A slow app will stay slow, a fast one won't truly suffer from the extra roundtrip. And besides, if you replace POSTs with GETs where appropriate, the effect of PRG becomes even less noticeable.

The problem with ASP.NET WebForms

Now that you know about POST-Redirect-GET you are of course eager to use it (at least I hope I could convince you). But as an ASP.NET WebForms developer you will soon run into problems: ASP.NET WebForms is fundamentally based on POSTs to the server. In essence, every ASP.NET web page is wrapped in one huge <form> element with its method set to POST. Whenever you click a button, you essentially POST all form fields to the server. Of course you can redirect from a Button.Click handler - if you do so, you're applying PRG. At the same time you're working against the WebForms philosophy, especially the ViewState (which gets lost as soon as you redirect), which will force you to rethink a lot of your application logic. And if you don't rely on the postback behaviour inherent to ASP.NET WebForms, you might as well ask why you're using WebForms in the first place.

This makes clear why a lot of developers (including me) think that WebForms are inherently "broken" (viewstate, ubiquitous postbacks and the hard-to-mock HttpContext are just a few reasons). If you share these concerns but like .NET just as I do, you might want to look at alternate .NET based web frameworks such as Castle MonoRail or ASP.NET MVC.


In situations where you use AJAX, the whole PRG issue becomes a different story. AJAX responses don't appear in the history, you wouldn't want to bookmark them, and refreshing a web page does not re-issue any AJAX requests (except those fired on page load). Therefore I have no problem with returning entities (HTML fragments, JSON, XML) from AJAX POSTs - PRG is not of much use here.


To conclude this article here's a list of some basic rules that have been useful to me:

  1. Use POST-Redirect-GET whenever you can, that is: whenever you process a POST request on the server, send a redirect to a GETtable resource as the response. It's applicable in almost all cases and will make your site much more usable for the visitor.
  2. Don't POST what you can GET. If you only want to retrieve a parameterised resource it might be completely suitable to use a GET request with query string parameters. Google is a good example. The start page contains a simple form with a single text field to enter the search terms. Submitting the form causes a GET to /search with the search terms passed as the query string parameter q. This can be easily done by providing method="GET" on the <form> element (or just leave out the method attribute, as GET is the default).
  3. POST requests issued from AJAX are allowed to return entities directly, as they don't suffer from the same problems as "full" POSTs do.

Posted in: ASP.NET | Patterns


Get the names of databases on a SQL Server

June 20, 2008 at 6:34 PMAndre Loker

For administrative tasks you might need the names of all databases on a given SQL Server. Luckily SQL Server comes with some neat stored procedures that help a lot, for example:

   EXEC sp_databases; -- get name, size and remarks
   EXEC sp_helpdb;    -- get name, size, owner, dbid, creation date,
                      -- status and compatibility level

Those two SPs are certainly nice to have, but they return more than you might need. Given that it is not so easy to perform a SELECT on their results, here's a simple query that returns the names of all (online) databases:

   SELECT db_name(database_id)
   FROM sys.master_files
   WHERE state = 0 -- only fetch databases that are online
   GROUP BY database_id;
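As a side note, on SQL Server 2005 and later the same list can also be read directly from the sys.databases catalog view, which avoids the GROUP BY over master_files (a sketch based on the documented columns of sys.databases):

   SELECT name
   FROM sys.databases
   WHERE state = 0; -- 0 = ONLINE; see state_desc for the readable form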

Admittedly I did not come up with this all by myself. I simply looked at what sp_databases does and extracted the stuff I needed :-)

Posted in: Databases | Snippets
