Using Gallio with NCover

March 17, 2009 at 11:39 PM by Andre Loker

I’m really keen to use MbUnit 3, but I still have some obstacles with Gallio to overcome. I use NCover to check the coverage achieved by the unit tests in the assembly under test – and only in that assembly. The whole procedure is part of an automatic build process executed with NAnt.

NAnt + NCover + MbUnit 2.4 = Happy Go Lucky!

For the combo I used before (MbUnit 2.4), my build files contained something similar to this:

<property name="unittest.cmdline" value='/ap:"${build.dir}" /rf:"${testresults.dir}" /rnf:"mbunit-${target.name}" /rt:Xml "${target.path}"'/>
<property name="unittest.runner" value="${tools.dir}\MbUnit\mbunit.cons.exe"/>

<exec
  program="${tools.dir}\NCover\ncover.console.exe"
  workingdir="${build.dir}"
  commandline='"${unittest.runner}" ${unittest.cmdline} //a "${coverage.assemblies}" //w "${build.dir}" //reg //x "${testresults.dir}\coverage-${target.name}.xml" //ea CoverageExcludeAttribute'
  if="${ncover.supported}"
/>

While this might look scary, it boils down to calling this:

ncover.console mbunit.cons.exe SomeAssembly.Tests.dll //a SomeAssembly

(plus a number of switches that I omitted for simplicity). This works perfectly and generates a nice coverage file for SomeAssembly based on the tests in SomeAssembly.Tests.dll – just what I want.

NAnt + NCover + Gallio = Does Not Compute

My first train of thought was: hey, with Gallio being more or less the successor of MbUnit, moving to Gallio shouldn’t be that hard – just replace MbUnit.Cons.exe with Gallio.Echo.exe, fiddle with the arguments and you’re done. So I changed the build file to do something along these lines:

ncover.console gallio.echo.exe SomeAssembly.Tests.dll //a SomeAssembly

And indeed, the tests were run and a nice shiny coverage file was generated – which, however, was empty. Erm? What now? I don’t know exactly what causes this behaviour; it seems that Gallio runs the tests in a separate application domain or process, which prevents NCover from instrumenting them.

Second chance: let Gallio do it!

I remembered that there was a special switch to set the “runner” used by Gallio. It defaults to “IsolatedAppDomain”, but there’s also a “NCover” runner available. Someone gave me the hint to simply use this runner.


Maybe I only need to tell Gallio to use NCover and we’re happy? So I changed my build file to do something like:

gallio.echo.exe SomeAssembly.Tests.dll /r:NCover

And indeed, now it runs the tests and creates a non-empty coverage file. Sweet! A bit slow, but running at least. The problem is – and you might have predicted it – that coverage is gathered for all loaded assemblies, including those of MbUnit and Gallio. So even for a trivial assembly under test I get a coverage file of 5 MB. I couldn’t find a way to pass arguments to NCover. Argh! The problem is known, by the way, but yet to be fixed.

Update: this problem is fixed in 3.0.6 builds. There you can pass arguments to NCover using the NCoverArguments and NCoverCoverageFile options.

At last: taming the beast

Believe me if I tell you that at that point I was just annoyed. Up to then, all that Gallio had given me was headaches and frustration. This article was initially meant to reflect that.

That changed when I reconsidered the problem of the empty coverage file and remembered that there was a “Local” runner listed in the Gallio.Echo.exe help. Maybe this would force Gallio to run the tests in a way that lets NCover do its magic? Thus, I tried this (note the /r switch):

ncover.console gallio.echo.exe SomeAssembly.Tests.dll /r:Local //a SomeAssembly

Let’s see what we get:

  • Tests are found and executed? Check.
  • The coverage file is non-empty? Check.
  • The coverage file only contains information about “SomeAssembly”? Check.

Could it be? After several disappointments I am finally able to integrate Gallio into my automatic builds! Well, I have not yet fully integrated the Gallio reports into my CruiseControl.NET web dashboard, but I’ll keep that for another day. As for today, I’m happy with what I achieved.
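For reference, here’s roughly how this plugs back into the NAnt build file from the beginning. This is a sketch, not a verified drop-in: the NCover //-switches are the ones from the first snippet, while the Gallio install path is an assumption – adjust it to your setup.

<property name="unittest.runner" value="${tools.dir}\Gallio\bin\Gallio.Echo.exe"/>

<!-- Sketch: Gallio.Echo runs under NCover with the Local runner,
     so that NCover can instrument the test process. -->
<exec
  program="${tools.dir}\NCover\ncover.console.exe"
  workingdir="${build.dir}"
  commandline='"${unittest.runner}" /r:Local "${target.path}" //a "${coverage.assemblies}" //w "${build.dir}" //reg //x "${testresults.dir}\coverage-${target.name}.xml" //ea CoverageExcludeAttribute'
  if="${ncover.supported}"
/>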

Update:

Gallio also provides a NAnt task which you can use instead of <exec>’ing. Starting from version 3.0.6 you can also pass arguments to NCover (as described above). I tested the following task in NAnt and it worked smoothly:

<gallio result-property="testrunner.exit-code"
        application-base-directory="${build.dir}"
        runner-type="NCover"
        failonerror="false"
        report-name-format="gallio-${target.name}"
        report-types="xml"
        report-directory="${testresults.dir}">
  <runner-property value="NCoverArguments='//w ${build.dir} //q //ea CoverageExcludeAttribute //a ${coverage.assemblies}'" />
  <runner-property value="NCoverCoverageFile='${testresults.dir}\coverage-${target.name}.xml'" />
  <assemblies>
    <include name="${target.path}" />
  </assemblies>
</gallio>

<fail if="${testrunner.exit-code != '0'}">The return code should have been 0!</fail>

Thanks to Bruno Wouters and Jeff Brown for the hint!

 

Conclusion

To condense the story into two sentences:

  1. If you want to use Gallio with NCover, be sure to choose the “Local” runner by passing the /r:Local option.
  2. Don’t use the NCover runner if you need any control over NCover options. Update: or use version 3.0.6+, which lets you pass NCover options.

To the Gallio people

I hope this article isn’t utterly stupid or useless. I looked on the web but did not come across anything useful regarding NCover and Gallio, and I couldn’t find any hint about it in the documentation. I hope this article helps people who, like me, would really like to use Gallio but struggle with issues.

Update: thanks for your help!

Posted in: Tools


Big websites gone small

March 12, 2009 at 11:24 PM by Andre Loker

If you need a simple way to generate a thumbnail of a website, have a look at thumbscreator.net. This service, created by Jan Welker and Klaus Bock, allows you to create snapshots of websites in three different sizes by simply passing the site’s address to a specific URL. Should be worth a bookmark, I say.

 

Posted in: Tools


Re: Health Monitoring in ASP.NET 3.5

March 12, 2009 at 11:33 AM by Andre Loker

Some days ago I read this cool article about the built-in health monitoring features of ASP.NET. Funny that I never stumbled across it before – I always implemented my own health monitoring.

The other day I played around with the health monitoring feature and it looks really great. As a quick reference here’s a short description of the elements of the health monitoring facility:

healthMonitoring

EventMapping

This element describes which events are to be captured. The type attribute references one of the event types derived from WebBaseEvent. All events of that type or a derived type are captured. For example, to capture all kinds of errors in the application, type would be set to System.Web.Management.WebBaseErrorEvent; for events regarding the application lifecycle (start, restart, stop etc.) one would use System.Web.Management.WebApplicationLifetimeEvent. You can further filter on a specific range of event codes (e.g. as defined in the WebEventCodes class).

The machine-wide web.config has already defined numerous event mappings for you, so it’s unlikely that you need to define your own:

  • All Events, captures all events (WebBaseEvent and below)
  • Heartbeats, captures events that are automatically generated by ASP.NET in a given interval as defined by the heartbeatInterval attribute of the healthMonitoring section (WebHeartbeatEvent)
  • Application Lifetime Events, captures compilation, application startup, restart, shutdown (WebApplicationLifetimeEvent)
  • Request Processing Events, raised on each and every request (WebRequestEvent)
  • All Errors, captures all errors (WebBaseErrorEvent and below)
  • Infrastructure Errors, captures all system-related errors (WebErrorEvent)
  • Request Processing Errors, captures all request related errors (WebRequestErrorEvent and below)
  • All Audits, captures security related events (WebAuditEvent)
  • Failure Audits, captures failed audits (WebFailureAuditEvent)
  • Success Audits, likewise captures succeeded audits (WebSuccessAuditEvent)

The first part of each item (e.g. “All Events”) is the name of the event mapping.
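Should you ever need your own mapping, it would look something like this inside the healthMonitoring section (a sketch based on the description above; the name “My Errors” is arbitrary, and startEventCode/endEventCode are the optional event-code range filter):

<eventMappings>
  <!-- Sketch: captures WebBaseErrorEvent and everything derived from it.
       "My Errors" is an arbitrary name; the type name is assembly-qualified. -->
  <add name="My Errors"
       type="System.Web.Management.WebBaseErrorEvent,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
       startEventCode="0"
       endEventCode="2147483647" />
</eventMappings>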

Provider

Providers describe how to process an event. ASP.NET comes bundled with a good selection of providers that cover most needs, for example:

  • write to the event log: EventLogWebEventProvider
  • write to the ASP.NET trace: TraceWebEventProvider
  • send an email when the event is captured: SimpleMailWebEventProvider and TemplatedMailWebEventProvider
  • write to a SQL Server table: SqlWebEventProvider

Again, ASP.NET has some standard providers defined:

  • EventLogProvider, writes to the event log
  • SqlWebEventProvider, writes to a database using the connection string named LocalSqlServer
  • WmiWebEventProvider, generates WMI events

As you see, you’ll have to set up a mail-sending provider yourself, but this is shown in the article mentioned above.
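For example, registering the mail provider could look roughly like this (a sketch: the name “MailProvider” and the addresses are made up, and the SMTP server itself is configured under system.net/mailSettings):

<providers>
  <!-- Sketch: sends captured events by mail. "MailProvider" is an arbitrary
       name; from/to are placeholder addresses. -->
  <add name="MailProvider"
       type="System.Web.Management.SimpleMailWebEventProvider,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
       from="app@example.com"
       to="admin@example.com"
       buffer="false" />
</providers>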

Rule

EventMappings and Providers won’t do you any good unless you define a rule. It’s the rules that actually activate the health monitoring system: they link providers to event mappings. Use the “eventName” attribute to define the set of events to be captured and the “provider” attribute to decide what to do with those events.

You can configure additional settings, like

  • how often an event has to be raised before the provider performs its action (minInstances)
  • how many times events may occur before processing is stopped (maxLimit); may be “Infinite” or an integer
  • the interval within which consecutive occurrences of an event are ignored (minInterval)

Instead of defining the settings above with each and every rule, you can – and should – use profiles (see below).

As with the other elements there are two prefab rules, both of which use the EventLogProvider to write events to the Windows event log:

  • All Errors Default, logs “All Errors” at a minInterval of 1 minute
  • Failure Audits Default, logs "Failure Audits" also at a minInterval of 1 minute
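Putting it together, a custom rule linking the hypothetical examples from above might look like this (a sketch; “My Errors” and “MailProvider” are the made-up event mapping and provider from the earlier snippets):

<rules>
  <!-- Sketch: mails every captured error. "My Errors" and "MailProvider"
       are the hypothetical elements defined above. -->
  <add name="Errors To Mail"
       eventName="My Errors"
       provider="MailProvider"
       profile="Default"
       minInterval="00:01:00" />
</rules>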

Profile

As mentioned above, profiles are used to centralize the configuration of the settings minInstances, maxLimit and minInterval. Simply define a profile, name it, and use this name as the value of a rule’s “profile” attribute. Nothing fancy, but useful.

Two profiles are defined in the global web.config:

  • Default: minInstances = 1, maxLimit = Infinite, minInterval = 1 minute (00:01:00)
  • Critical: minInstances = 1, maxLimit = Infinite, minInterval = 0 seconds (00:00:00), that is each occurrence will be processed
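In config form, such a profile is just a few attributes (a sketch mirroring the built-in “Default” profile listed above):

<profiles>
  <!-- Sketch: equivalent to the built-in "Default" profile. -->
  <add name="Default"
       minInstances="1"
       maxLimit="Infinite"
       minInterval="00:01:00" />
</profiles>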

BufferMode

Finally, we have the buffer modes. If you enable buffering on a provider (buffer="true"), you also need to define how the buffering is done by referencing a specific buffer mode (bufferMode="name of a buffer mode"). According to the documentation, buffer modes only apply to the SqlWebEventProvider, but I haven’t verified that.
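Enabling buffering might look roughly like this (a sketch; the provider name is arbitrary, and the bufferMode value must name one of the modes listed below):

<providers>
  <!-- Sketch: a buffered SQL provider. "Notification" refers to one of the
       buffer modes defined in <bufferModes>. -->
  <add name="BufferedSqlProvider"
       type="System.Web.Management.SqlWebEventProvider,System.Web,Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
       connectionStringName="LocalSqlServer"
       buffer="true"
       bufferMode="Notification" />
</providers>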

The global web.config defines four buffer modes, ranging from least aggressive buffering for critical events to large buffers for non-crucial events:

  • Critical Notification
  • Notification
  • Analysis
  • Logging

For details of the configured values, look at your global web.config or at the documentation.


Posted in: ASP.NET


Algorithms: recursive and iterative depth first search

March 10, 2009 at 1:31 AM by Andre Loker

I feel like blogging, but I don’t have anything fancy to say – so why not talk about something as basic as a graph-searching algorithm?

The other day I needed to traverse a graph using depth first search – certainly not the most difficult thing in the world. A very basic recursive version can look like this:

public void DfsRecursive(Node node) {
  DoSomethingWithNode(node);
  foreach(Transition t in node.Transitions) {
    Node destNode = t.Destination;
    DfsRecursive(destNode);
  }
}

This assumes that a Node has a number of Transition instances that lead to the next node. BTW: I only consider trees here (i.e. graphs without loops). All snippets shown here can be used for graphs by keeping track of visited transitions and continuing only for new ones (HashSet&lt;Transition&gt; is your friend), as sketched below.
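A minimal sketch of that graph-safe variant, assuming the Node and Transition types from the surrounding snippets (DfsRecursiveSafe is a hypothetical name):

// Sketch: cycle-safe DFS. Each transition is taken at most once, so the
// traversal terminates even on graphs with loops. Note that a node reachable
// via several transitions may still be visited more than once.
public void DfsRecursiveSafe(Node node, HashSet<Transition> taken) {
  DoSomethingWithNode(node);
  foreach(Transition t in node.Transitions) {
    if(taken.Add(t)) {   // Add returns false if t was already taken
      DfsRecursiveSafe(t.Destination, taken);
    }
  }
}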

My nodes did not contain the transitions as a list or array. Instead, a node only had a link to its first transition, and each transition had a link to the next one – a linked list of transitions. Furthermore, I needed to perform actions not only when a node was entered, but also i) when a transition was followed, ii) when a transition was “undone” and iii) when a node was left. The latter two occur during the backtracking phase. Luckily, with the recursive implementation this was quite easy:

public void Traverse(Node node) {
  EnterNode(node);

  var t = node.FirstTransition;
  while(t != null) {
    var destNode = TakeTransition(t);
    Traverse(destNode);
    UndoTransition(t);
    t = t.NextSibling;
  }

  ExitNode(node);
}

However, I needed to traverse very deep trees with depths of up to 500,000 nodes, which made the recursive implementation infeasible (it bails out with a StackOverflowException on graphs deeper than a couple of thousand nodes).

The “classical” iterative version of DFS looks like this:

public void DfsIterative(Node node) {
  var trail = new Stack<Transition>();
  DoSomethingWithNode(node);
  PushAllTransitionsToStack(node, trail);
  while(trail.Count > 0) {
    Transition t = trail.Pop();
    Node destination = t.Destination;
    DoSomethingWithNode(destination);
    PushAllTransitionsToStack(destination, trail);
  }
}

This is more or less what you will see when someone mentions iterative DFS. Note, by the way, that this version correctly supports multiple transitions to the same destination node – otherwise the algorithm could be even simpler. Also, the order in which the transitions of a node are visited is reversed, because the stack pops them last-in, first-out.
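A possible implementation of the PushAllTransitionsToStack helper used above, assuming the linked-list style of transitions described earlier (pushing in list order is exactly what reverses the visit order):

// Sketch: push all outgoing transitions of a node onto the trail stack.
// Because the stack is LIFO, they are later popped in reverse list order.
static void PushAllTransitionsToStack(Node node, Stack<Transition> trail) {
  for(var t = node.FirstTransition; t != null; t = t.NextSibling) {
    trail.Push(t);
  }
}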

The problem was that I needed to place my calls to UndoTransition and ExitNode somewhere. Using the approach shown above makes that somewhat difficult: the stack does not represent the actual path taken through the graph. Instead, it contains transitions on the path as well as siblings that are yet to be visited. Interestingly enough, I didn’t find anything on the net that fitted my needs – most sites mentioning iterative DFS use the approach described above. So, here’s what I came up with after some time of thinking:

public void Traverse(Node root) {
  var trail = new Stack<Transition>();
  EnterNode(root);
  if(root.FirstTransition != null) {  // guard against a root without transitions
    trail.Push(root.FirstTransition);
  }

  while(trail.Count > 0) {
    Transition current = trail.Peek();

    Node reachedNode = TakeTransition(current);
    EnterNode(reachedNode);

    // try to descend ...
    Transition next = reachedNode.FirstTransition;

    // ... or backtrack
    while(next == null && trail.Count > 0) {
      Transition top = trail.Pop();
      ExitNode(top.Destination);
      UndoTransition(top);
      next = top.NextSibling;
    }

    if(next != null) {
      trail.Push(next);
    }
  }
  ExitNode(root);
}

What’s cool about this approach:

  • You can always tell the current depth of the traversal by simply checking trail.Count.
  • The trail contains the exact path to the current node, so it’s easy to dump the trail for an interesting node (see the sketch below).
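For instance, a tiny helper that exploits both properties (DumpTrail is a hypothetical name):

// Sketch: trail.Count is the current depth; iterating the stack yields the
// transitions from the current node back up to the root, top of stack first.
static void DumpTrail(Stack<Transition> trail) {
  Console.WriteLine("depth = " + trail.Count);
  foreach(Transition t in trail) {
    Console.WriteLine("  via " + t);
  }
}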

In my project I used some extensions to this basic version:

  • I used it to traverse graphs (not only trees), so I added a check whether a given state has already been visited before
  • I extended it to traverse over multiple graphs at once. Useful to explore the complete state space of the parallel composition of two graphs. Maybe I’ll go into details in a future post.

Is it something new? Certainly not. Is it rocket science? No! Could it be useful for you to have the skeleton of an iterative DFS with the ability to define backtracking actions at hand next time you need it? I hope so. Have fun!

Posted in: Snippets


Perfectionist's block

March 4, 2009 at 9:15 PM by Andre Loker

So you’re reading all the blog entries and books about patterns & practices, how to design great applications and write great code and what not, and you think to yourself: “gosh, with that knowledge I’m going to write the best applications EVER!”. And then you fire up your machine, want to start developing that greatest app and… just can’t. You simply lock up, because you are afraid of writing something sub-par, of missing one of the techniques for writing great applications. And at the end of the day, you’ve created nothing. Instead you hate the code you’ve written before, knowing that there is so much about it to improve.

Does this sound familiar? Even if it doesn’t to you, it does to me – at least for my own pet projects. Sometimes I face this very fear of creating sub-optimal code, a phenomenon I like to call “perfectionist’s block” (as in “writer’s block”). By the way: with projects I’m doing for customers I don’t face the block, because deadlines etc. keep me going.

What to do to break through this block? I remind myself that creating nothing at all is certainly worse than creating something imperfect. Writing semi-perfect code is no problem if you keep in mind that you can always improve it later. Backed by a covering set of unit tests (or specs for the BDD people), code can almost always be safely refactored to a better version. I think that getting things done to the best of our knowledge and intentions is the best we can do.

If I feel a perfectionist’s block coming up the next time, maybe reading this post will help me :-)

Posted in: Other
