


It gets me when application frameworks tamper with the very core web concepts they are trying to solve. If you have WCF services exposed through any of their different endpoints, you have to do the most ridiculous dancing to get something as simple as the HttpContext. WTF is up with that, Microsoft!?!

There are like 10 different ways to access HttpContext and request headers, all weird in their own way, none of them standard, and each requiring callers to add headers in its own different and specific way:

  • There is HttpContext (or this.Context or HttpContext.Current): “Gets or sets the HttpContext object for the current HTTP request.” This would be the obvious choice, but the WCF team needed to get COMPLICATED! To support this, you have to add extra magic and attributes to your service contracts (read here)
  • Then we get fancy with something that is not quite the HttpContext the WEB knows and loves, but some new BS called OperationContext (OperationContext.Current). MSDN explains: “Provides access to the execution context of a service method”... but of course!
  • Also, the HttpContextBase class, according to MSDN, “serves as the base class for classes that contain HTTP-specific information about an individual HTTP request”. So, you’d naturally think that HttpContextBase is the base class of HttpContext, right? WRONG!

Hmmm, at this point you think this might be a brain teaser. There may be another 2-3 ways to access data from similar concepts. If inspecting the HttpContext on the server side is a nightmare, managing headers and contextual HTTP request elements on the client is even worse if your client is using the WCF contracts generated by Visual Studio. Here you are either setting something called ‘OutgoingMessageHeaders’ on an HTTP request (as if something could be ‘incoming’ during a request), or you are implementing a custom IClientMessageInspector and altering the request before it is sent to the server: what is this, the Police Academy (Inspector, pffff)? Why do I need to inspect a message I built? Why am I forced to do this kind of crap?
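If you do go the inspector route, the hoop-jumping looks roughly like this (a sketch only; the header name, namespace and value are made up):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch of a custom IClientMessageInspector that stamps a header on every
// outgoing request before it leaves the client.
public class HeaderStamper : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Header name/namespace are illustrative.
        var header = MessageHeader.CreateHeader("X-My-Header", "urn:myns", "some-value");
        request.Headers.Add(header);
        return null; // correlation state, unused here
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}
```

You would still have to wire this up to the proxy through an IEndpointBehavior, which is yet another hoop.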

This is so frustrating I cannot cope with the unnecessary layers of engineering and noise the WCF team threw over such a simple concept. I have nothing against new and different ways to solve problems, but please don’t call it the same as something that already exists and is well defined by the HTTP protocol specification (RFC 2616). PLEASE. DON'T.

I’ll try working around it with Rick Strahl’s post. If I keep having problems, I’ll move to a different framework, implement my own IHttpHandler, or downplay WCF’s capabilities.



Killing all Cassini(s) with a .bat file

When working with Visual Studio and the Web Development Server (aka Cassini), you often face the repetitive task of killing the Cassini processes before running again. This happens especially if you are working with in-proc caching on IIS, or simply have many web applications in your solution.

What I do is run a simple .bat file that automatically kills all the Cassini Web Development Server instances. That way, if I need to make sure I'm using uncached data, I just run it and keep doing what I was doing, instead of manually scanning my taskbar.

So, crack open notepad, write the following and save it as "CassiniKiller.bat"

[sourcecode language="text"]
taskkill /F /IM "WebDev.WebServer40.exe"
taskkill /F /IM "ProcessInvocation86.exe"
taskkill /F /IM "iexplore.exe"
[/sourcecode]

Now you won't have to play cat and mouse with the Cassinis no more; now you just KILL'EM ALL :-)



Polymorphism... back to school

Almost anyone with some notion of programming or object-oriented techniques will be able to describe what polymorphism is and how to use it; I've discovered recently that very few can tell you how it actually works. While this may not be relevant to the daily work of many software developers, the consequences of not knowing, and thus of misusing it, affect everyone. We all know how to describe a car and even how to use it, but only mechanics know how to fix it, because they know how IT WORKS.



Polymorphism: What it is

Polymorphism is a characteristic or feature of programming languages. Programming languages either support it or they don’t. Since programming languages fall under the umbrella of sometimes substantially different paradigms, in this article I’m going to concentrate on polymorphism within the scope of Object Oriented Programming.

In OOP, polymorphism is considered one of the basic principles and a very distinctive one. Most object-oriented languages have polymorphism among their many features. In a nutshell, polymorphism is best seen when the caller of an object with a polymorphic implementation is not aware of the exact type of the object. Polymorphism interacts very closely with other features like inheritance and abstraction.


Polymorphism: What it is NOT

Contrary to an overwhelming number of articles found online, things like overloading polymorphism or parametric polymorphism are NOT object-oriented expressions of polymorphism; they are applicable to functional (declarative) programming languages. Signature-based polymorphism, overloading polymorphism, parasitic polymorphism and polymorphism in closures are meaningful in functional languages only (or at best, in multi-paradigm languages with functional expressions and/or duck-typing languages, like Python or JavaScript).

Polymorphism is not boxing or unboxing, and it is not the same as inheritance; rather, polymorphic manifestations usually occur as a consequence of inheritance, sub-typing and boxing.


Polymorphism: How it works

As its name suggests, polymorphism is the ability of an object to have more than one form or, more properly, to appear as if it were of some other type. The most common manifestation can be seen with sub-typing; let’s see it through an example:

[sourcecode language="csharp"]
public class Fruit
{
    public virtual string Description()
    {
        return "This is a fruit";
    }
}

public class FreshApple : Fruit
{
    public override string Description()
    {
        return "Fresh Apple";
    }
}

public class RottenBanana : Fruit
{
    public override string Description()
    {
        return "Banana after 10 days";
    }
}
[/sourcecode]

Now we can do something like this:

[sourcecode language="csharp"]
Fruit[] fruits = { new FreshApple(), new RottenBanana(), new Fruit() };
foreach (var fruit in fruits)
{
    Console.WriteLine("Fruit -> {0}", fruit.Description());
}
Console.ReadLine();
[/sourcecode]

... and that would produce the following output:

Fruit -> Fresh Apple
Fruit -> Banana after 10 days
Fruit -> This is a fruit

That’s all well and good, and almost anyone will get this far. The question is HOW does this happen: HOW does the runtime realize that the method it must call is the one in the child class instead of the one in the parent class? How deep does this rabbit hole go? There is no magic element here, and you’ll see exactly HOW this occurs and WHY it is important to know about it.

These mappings of function calls to their implementations happen through a mechanism called dispatch tables or virtual tables (vTables for short). Virtual tables are a feature of programming languages (again, they either support them or they don’t). Virtual tables are present mostly in languages that support dynamic dispatch (C++, C#, Objective-C, Java), meaning they can bind to function pointers or objects at run-time, abstracting away the actual implementation of the method/function/object until the very moment it is used.

The vTable itself is nothing more than a data structure, no different from a Stack or a Hashtable in that respect; it just happens to be the one used for the dynamic dispatch feature in programming languages. Some languages offer a different structure, called binary tree dispatch, as an alternative to vTables. Like any data structures, both have pros and cons when it comes to measuring performance and cost. The point is that, whether it is a vTable or a bTree dispatch, dynamic dispatching is bound using a data structure that is carried over with objects and functions to support this feature. Yes, I said “carried over”.

vTables are a hidden variable of objects. In .NET, for example, all classes and structs inherit from Object, and Object already has virtual functions like "ToString()" and "GetHashCode()", so every single object you create will have a hidden, private vTable structure that is always initialized behind the scenes during every constructor call. The purpose of these vTables being created all over the place is exactly to map the correct functions for polymorphic objects and functions. You can use Reflector to peek at an object at runtime (IL) and you'll find its vTable in the first memory location of each object's memory segment. The IL instructions call and callvirt (polymorphism, yay!) are used to call non-virtual and virtual methods respectively. There is simply no way of accessing an object's vTable directly from C#; this was done on purpose, to minimize its security implications among other reasons. In C++, hmm...

[sourcecode language="cpp"]
long *vptr = (long *) &obj;   // the vTable pointer sits at the object's first memory location
long *vtable = (long *)*vptr; // dereference it to reach the vTable itself
[/sourcecode]

So the vTable of each object holds a reference to each of its virtual functions. Think of it, structurally, as a list of function pointers sorted in the same order they are declared. This serves its purpose in inheritance and polymorphism: when a child object gets initialized, regardless of being assigned to a variable of the parent type, the vTable of that object will be the one corresponding to the actual inheritor. When virtual functions get called on the variable, the correct function in the child object (the actual object type) is found thanks to the function pointers in its vTable.
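Conceptually (and only conceptually — the CLR does not expose its real method tables), you can model this as each type carrying an ordered table of function pointers, with the call site knowing only the slot index, never the concrete type. A toy sketch in C#, using delegates as stand-ins for function pointers:

```csharp
using System;

// A toy model of dynamic dispatch. Each "type" carries its own table of
// function pointers (delegates); the caller indexes into slot 0 without
// knowing which table it got. This is NOT how the CLR's real vTables work
// internally; it only illustrates the mechanism.
public static class VTableDemo
{
    // Slot 0 = Description(). The second table "overrides" slot 0.
    public static readonly Func<string>[] FruitVTable = { () => "This is a fruit" };
    public static readonly Func<string>[] AppleVTable = { () => "Fresh Apple" };

    // The "virtual call": identical call site for every table.
    public static string Describe(Func<string>[] vtable)
    {
        return vtable[0]();
    }
}
```

`VTableDemo.Describe(VTableDemo.AppleVTable)` returns "Fresh Apple" even though the call site is byte-for-byte the same as for the base table — which is exactly what callvirt does with the real thing.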

Virtual tables, inheritance and polymorphism in general have been criticized for their overhead in program execution. That is why one of the object-oriented programming principles is "Favor Object Composition Over Inheritance". Non-virtual functions never require the overhead of vTables, making them naturally faster than virtual functions. Applications and systems with a deep inheritance architecture tend to spend a considerably large amount of their execution time just resolving the call paths of their virtual functions. This can become quite a performance tax if used without care, especially on older CPU architectures. Most modern compilers have a trick or two up their sleeves to resolve virtual function calls at compile time without the need for vTables, but dynamic binding will always need these creatures to resolve its destination paths.

And now you know how polymorphism works. Happy coding!



Garbage Collection – Pt 3: Generations

The .NET Garbage Collector is a generational collector, meaning it collects objects in different generations. The main purpose of generations is performance. Simply put, having the GC perform a full garbage collection over every object reference tree on every collection is way too expensive. The idea behind generational collection is that, by segmenting the collection, the GC visits objects with a short lifespan more often than those with a long lifespan. Given that most objects are short-lived, such as local variables and parameters (they go out of scope relatively fast), we can collect memory a lot faster and more efficiently if we collect these kinds of objects more frequently than objects with a long lifespan.

So how does the GC determine the lifespan of an object? Does it even do that? How does GC manage its generations? What exactly is a generation?

GC Roots and Generations

Generations are logical views of the Garbage Collector Heap (read Part2). The GC has 3 generations: gen-0, gen-1 and gen-2. Gen-0 and gen-1 are known as ephemeral generations because they store small objects that have a short lifespan. GC generations also have precedence, so gen-0 is said to be a younger generation than gen-1, and gen-1 is a younger generation than gen-2. When objects in a generation are collected, all younger generations are also collected:

  • Collect (gen-0) = Collect (gen-0)
  • Collect (gen-1) = Collect (gen-0) + Collect (gen-1)
  • Collect (gen-2) = Collect (gen-1) + Collect (gen-2)

Gen-0 and gen-1 collections are very fast, since the heap segment is relatively small, while gen-2 collections can be relatively slow. A gen-2 collection is also referred to as a Full Garbage Collection because the whole heap is collected. When it’s time for a gen-2 collection, the GC stops program execution and finds all the roots into the GC heap. Starting from each root, the GC visits every single object, tracks down every pointer and reference to other objects contained in each visited object, and marks them all as it moves through the heap. At the end of the process, the GC has found all reachable objects in the system, so everything else can be collected because it is unreachable. The golden rules GC collections never break are:

  1. GC only collects objects that are not GC roots.
  2. GC only collects objects that are not reachable from GC roots.

In Garbage Collection Part 2, I briefly talked about GC roots but not with much depth. There are many different categories or types of GC roots, but the more common and significant ones are static variables, global variables and objects living in the Stack that are pointing to the Heap. Here is kind of a messy idea of how things look when it comes to GC roots:

Now, in reality the heap doesn’t look that bad, nor is it organized in such a hectic manner. The picture is meant to illustrate the GC root pointers into the heap. Later in this article I’ll cover the innards of the heap in more detail.

Every time the GC collects, objects that survive the generational collection just completed (because they can be reached from at least one of the GC roots) get promoted to an older (higher) generation. This promotion mechanism ensures on each GC cycle that the younger the generation, the shorter the lifetime of the objects in it. The GC generational object promotion mechanism is rather simple and works as follows:

  1. Objects that survive gen-0 collection will be considered gen-1 objects and GC will attempt collecting them when it runs gen-1 collection.
  2. Objects that survive gen-1 collection will be considered gen-2 objects and GC will attempt collecting them when it runs gen-2 collection.
  3. Objects that survive gen-2 collection will be still considered gen-2 objects until they are collected.
  4. Only GC can promote objects between generations. Developers are only allowed to allocate objects to gen-0 and the GC will take care of the rest.
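You can watch promotion happen with GC.GetGeneration and GC.Collect. A sketch (exact promotion timing is up to the runtime, so treat the printed generations as typical rather than guaranteed):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        var survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // freshly allocated: 0

        GC.Collect(0);                                 // survivor is still rooted, so it gets promoted
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

        GC.Collect(1);
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2

        GC.KeepAlive(survivor); // keep the root alive through the collections above
    }
}
```

Without the GC.KeepAlive call, an optimizing JIT may treat the local as unreachable before the collections run, and the object could be collected instead of promoted.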

GC Heap Structure

As mentioned in a previous article, the managed heap is logically divided into 2 heaps, the Small Object Heap (SOH) and the Large Object Heap (LOH), where memory is allocated in segments. The next figure is a more accurate view of the managed heap (contrast it with the previous figure).

Because the objects collected during gen-0 and gen-1 have a short lifespan, these 2 generations are known as the ephemeral generations. All objects collected by gen-0 and gen-1 are also allocated in the ephemeral memory segment. The ephemeral segment is always the newest segment acquired by the GC. Every time the GC requests the OS for more memory and a new segment is allocated, the new segment becomes the ephemeral segment and the old ephemeral segment gets designated for gen-2 objects.

Facts about GC generations

  1. The GC has 3 generations: gen-0, gen-1 and gen-2.
  2. Gen-0 and gen-1 are known as the ephemeral generations.
  3. Gen-2 collections are known as Full Garbage Collections.
  4. Objects that survive collections get promoted to higher generations.
  5. Gen-2 collections are a lot more expensive and happen less often than gen-0 and gen-1 collections.
  6. The managed heap is logically divided into the SOH and the LOH.
  7. Memory is allocated in segments on the managed heap.
  8. The newest allocated segment is always the ephemeral segment.

Side reading: Check out this great article by Maoni Stephens titled Large Object Heap Uncovered, where she talks about many of the topics in this article.



Garbage Collection - Pt 2: .NET Object Life-cycle

This article is a continuation of Garbage Collection - Pt 1: Introduction. Like everything else in this world, objects in object-oriented programming have a lifetime, from when they are born (created) to when they die (destroyed). In the .NET Framework, objects have the following life cycle:

  1. Object creation (new keyword, dynamic instantiation or activation, etc).
  2. The first time around, all static object initializers are called.
  3. The runtime allocates memory for the object in the managed heap.
  4. The object is used by the application. Members (Properties/Methods/Fields) of the object type are called and used to change the object.
  5. If the developer decided to add disposing conditions, then the object is disposed. This happens by coding a using statement or manually calling to the object’s Dispose method for IDisposable objects.
  6. If the object has a finalizer, the GC puts the object in the finalization queue.
  7. If the object was put in the finalization queue, the GC will, at an arbitrary moment in time, call the object’s finalizer.
  8. Object is destroyed by marking its memory section in the heap segment as a Free Object.

The CLR Garbage Collector intervenes in the most critical steps in the object lifecycle; GC is almost completely managing steps 3, 6, 7 and 8 in the life of an object. This is one of the main reasons I believe the GC is one of the most critical parts of the .NET Framework that is often overlooked and not fully understood.

These are the key concepts and agents that participate in the life of a .NET object that should be entirely comprehended:

The Managed Heap and the Stack

The .NET Framework has a managed heap (aka the GC heap), which is nothing more than an advanced data structure that the GC uses when allocating reference type objects (mostly). Each process has one heap that is shared between all .NET application domains running within the process.

Similarly, the thread stack is another advanced data structure, one that tracks the code execution of a thread. There is one stack per thread per process.

They are often compared to a messy pile of laundry (GC heap) and an organized shoe rack (Stack). Ultimately they are both used to store objects with some distinctions. There are 4 things that we store on the Stack and Heap as the code executes:

  1. Value Types: Go on heap or stack, depending on where they were declared.
    • If a Value Type is declared outside of a method, but inside a Reference Type it will be placed within the Reference Type on the Heap.
    • If a Value Type is boxed it’ll be placed in the Heap.
    • If the Value Type is within an iterator block, it’ll be placed in the Heap.
    • Else goes in the Stack.
  2. Reference Types: Always go on the Heap.
  3. Pointers: Go on heap or stack, depending on where they were declared.
  4. Instructions: Always go on the Stack.
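The boxing rule above is easy to see in action. A quick illustration (the comments describe where the CLR typically places each value):

```csharp
using System;

class BoxingDemo
{
    static void Main()
    {
        int local = 42;       // value type declared inside a method: lives on the stack
        object boxed = local; // boxing: a COPY of the value is allocated on the heap

        local = 7;                // changing the stack copy...
        Console.WriteLine(boxed); // ...does not affect the boxed heap copy: prints 42

        int unboxed = (int)boxed;           // unboxing: copies the heap value back
        Console.WriteLine(unboxed + local); // prints 49
    }
}
```

The key point: the boxed object is an independent heap copy, which is exactly why it now falls under the GC's jurisdiction while the original stack slot does not.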

Facts about the Heap

  1. The managed heap is a data structure that allows for fast access to any object in the heap, regardless of when or where it was inserted.
  2. The managed heap is divided into heap segments. Heap segments are physical chunks of memory the GC reserves from the OS on behalf of CLR managed code.
  3. The 2 main segments of the heap are:
    1. The Small Object Heap (SOH) segment: where small objects (smaller than 85 KB) are stored. The SOH is also known as the ephemeral segment.
    2. The Large Object Heap (LOH) segment: where large objects (85 KB and larger) are stored.
  4. All reference types are stored on the heap.

Facts about the Stack

  1. The Stack is a block of pre-allocated memory (usually 1 MB) that never changes in size.
  2. The Stack is an organized storage structure where grabbing the top item is O(1).
  3. Objects stored in the Stack are considered ROOTS of the GC.
  4. Objects stored in the Stack have a lifetime determined by their scope.
  5. Objects stored in the Stack are NEVER collected by the GC. The storage memory used by the Stack gets deallocated by popping items from the Stack, hence its scoping significance.

Now you start to see how it all comes together. The last magic rule that glues them in harmony is that the objects the garbage collector collects are those that:

  1. Are NOT GC roots.
  2. Are not accessible by references from GC roots.

Object Finalizers

Finalizers are special methods that are automatically called by the GC before the object is collected. They can only be called by the GC provided they exist. The .NET ultimate base class Object has a Finalize method that can be overridden by child objects (anyone basically). The purpose of finalizers is to ensure all unmanaged resources the object may be using are properly cleaned up prior to the end of the object lifetime.

If a type has an implemented (overridden) finalizer at the time of collection, the GC will first put the object in the finalization queue, then call the finalizer and then the object is destroyed.

Finalizers are not directly supported by C# compilers; instead, you write destructors using the ~ character, like so:
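(A minimal sketch; the class name is made up, and the static flag exists only to make the finalizer call observable.)

```csharp
public class FileWrapper
{
    public static bool Finalized; // demonstration only: set when the GC runs the finalizer

    // C# destructor syntax: the compiler emits a Finalize override from this,
    // and automatically chains the call to base.Finalize() for you.
    ~FileWrapper()
    {
        // clean up unmanaged resources here
        Finalized = true;
    }
}
```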

The compiler implicitly translates C# destructors into Finalize overrides.

Facts about Finalizers

  1. Finalizer execution during garbage collection is not guaranteed to happen at any specific time, unless a Close or a Dispose method is explicitly called.
  2. Finalizers of 2 different objects are not guaranteed to run in any specific order, even if they are part of object composition.
  3. Finalizers (or destructors) should ONLY be implemented when your object type directly handles unmanaged resources. Only unmanaged resources should be freed up inside the finalizer.
  4. Finalizers must ALWAYS call the base.Finalize() method of the parent (this is not true for C# destructors, that do this automatically for you)
  5. C# does not support finalizers directly, but it supports them through C# destructors.
  6. Finalizers run in arbitrary threads selected or created by the GC.
  7. The CLR will only continue to finalize objects in the finalization queue while the number of finalizable objects in the queue continues to decrease.
  8. Not all finalizer calls may run to completion: a finalizer can block indefinitely (in its code), or the process in which the app domain is running can terminate without giving the CLR a chance to clean up.

More at MSDN.

Disposable Objects

Objects in .NET can implement the IDisposable interface, whose only contract is to implement the Dispose method. Disposable objects are created and disposed explicitly by the developer, and their main goal is to dispose of managed and unmanaged resources on demand (when the developer wants to). The GC never calls the Dispose method on an object automatically. The Dispose() method on a disposable object can only get executed in one of two scenarios:

  1. The developer invokes a direct call to dispose the object.
  2. The developer created the object in the context of a using statement.

For any disposable object type, calling the object’s Dispose() method directly and creating the object within a using statement are equivalent ways of disposing of it.
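A sketch of both forms (the ResourceHolder type is made up for illustration):

```csharp
using System;

// A made-up disposable type, for illustration only.
public class ResourceHolder : IDisposable
{
    public bool IsDisposed { get; private set; }

    public void Dispose()
    {
        // free managed and/or unmanaged resources here
        IsDisposed = true;
    }
}

class UsingDemo
{
    static void Main()
    {
        // Alternative 1: the using statement.
        using (var r1 = new ResourceHolder())
        {
            // use r1 ...
        } // r1.Dispose() is called here, even if an exception was thrown

        // Alternative 2: what the compiler expands the using statement into.
        var r2 = new ResourceHolder();
        try
        {
            // use r2 ...
        }
        finally
        {
            if (r2 != null) r2.Dispose();
        }
    }
}
```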


Facts about Disposable Objects

  1. An object is a disposable object if it implements the IDisposable interface.
  2. The only method of the IDisposable interface is the Dispose() method.
  3. In the Dispose method the developer can free up both managed and unmanaged resources.
  4. Disposable objects can be used by calling the object’s Dispose() method directly or by creating the object within a using statement.
  5. The Dispose() method on disposable objects will never get called automatically by the GC.

More at MSDN.


Ok, so now that we know the key players, let’s look again at the life of a .NET object. This time around we can understand a bit better what’s going on under the hood.

  1. Object creation (new keyword, dynamic instantiation or activation, etc).
  2. The first time around, all static object initializers are called.
  3. The runtime allocates memory for the object in the managed heap.
  4. The object is used by the application. Members (Properties/Methods/Fields) of the object type are called and used to change the object.
  5. If the developer decided to add disposing conditions, then the object is disposed. This happens by coding a using statement or manually calling to the object’s Dispose method for IDisposable objects.
  6. If the object has a finalizer, the GC puts the object in the finalization queue.
  7. If the object was put in the finalization queue, the GC will, at an arbitrary moment in time, call the object’s finalizer.
  8. Object is destroyed by marking its memory section in the heap segment as a Free Object.


PS: A good friend pointed out in the comments a good article by Eric Lippert titled "The Truth About Value Types". Go check it out!



Garbage Collection - Pt 1: Introduction

Garbage collection is a practice in software engineering and computer science that aims at the automation of memory management in a software program. Its origins date all the way back to 1958 and the Lisp programming language implementation (“Recursive Functions of Symbolic Expressions and their Computation by Machine” by John McCarthy), the first to carry such a mechanism. The basic problem it tries to solve is that of reclaiming unused memory in a computer running a program. Software programs rely on memory as the main storage for variables, network connections and all other data needed by the program to run; this memory needs to be claimed by the application in order to be used. But memory in a computer is not infinite, so we need a way to mark the pieces of memory we are no longer using as “free” again, so other programs can use them afterwards. There are 2 main ways to do this: manual resource cleanup or automatic resource cleanup. Both have advantages and disadvantages, but this article will focus on the latter, since it is the one garbage collectors represent.

There are plenty of great articles and papers about garbage collection theory. Starting in this article and with other entries following up, I’ll talk about the garbage collection concepts applied to the .NET Garbage Collector specifically and will cover the following areas:

  1. What is the .NET Garbage Collector? (below in this page)
  2. Object lifecycle and the GC. Finalizers and disposable objects (read Part 2)
  3. Generational Collection (read Part 3)
  4. Garbage collection performance implications (article coming soon)
  5. Best practices (article coming soon)

What is the .NET Garbage Collector?

The .NET Framework Garbage Collector (GC from now on) is one of the least understood areas of the framework, while being one of its most sensitive and important parts. In a nutshell, the GC is an automatic memory management service that takes care of resource cleanup for all managed objects in the managed heap.

It takes away from the developer the micro-management of resources required in C++, where you need to manually delete your variables to free up memory. It is important to note that GC is NOT a language feature, but a framework feature. The Garbage Collector is the VIP superstar of the .NET CLR, with influence over all .NET languages.

I’m convinced those of you who have worked with C or C++ cannot count how many times you forgot to free memory when it was no longer needed, or tried to use memory after you had already freed it. These tedious and repetitive tasks are the cause of the worst bugs any developer can face, since their consequences are typically unpredictable. They are the cause of resource leaks (memory leaks) and object corruption (destabilization), making your application and system behave erratically at random times.

Some garbage collector facts:

  1. It empowers developers to write applications with fewer worries about having to free memory manually.
  2. The .NET Garbage Collector is a generational collector with 3 generations (article coming soon).
  3. GC reserves memory in segments. The 2 main GC segments are dedicated to maintain 2 heaps:
    • The Small Object Heap (SOH), where small objects are stored. The first segment of the small object heap is known as the ephemeral segment, and it is where gen-0 and gen-1 are maintained.
    • The Large Object Heap (LOH), where large objects are maintained; the LOH is collected as part of gen-2.
  4. When GC is triggered it reclaims objects that are no longer being used, clears their memory, and keeps the memory available for future allocations.
  5. Unmanaged resources are not maintained by the GC, instead they use the traditional Destructors, Dispose and Finalize methods.
  6. The GC also compacts the memory that is in use, to reduce the working space needed to maintain the heap.
  7. The GC is triggered when:
    • The system is running low on memory.
    • The managed heap allocation threshold is about to be reached.
    • The programmer directly calls the GC.Collect() method in the code. This is not recommended and should be avoided.
  8. The GC can operate in two modes, Server and Workstation (in server mode it runs 1 GC thread per logical processor):
    • Server Garbage Collection: For server applications in need of high throughput and scalability.
      • Server mode can manage multiple heaps and run its collection work on multiple threads to maximize the physical hardware capabilities of servers.
      • Server GC also supports collection notifications, which allow a server-farm infrastructure where the router redirects work to a different server when notified that the GC is about to perform a collection, and resumes on the main server when the GC finishes.
      • The downside of this mode is that all managed code is paused while the GC collection is running.
    • Workstation Garbage Collection: For client applications. This is the default mode of the CLR and the GC always runs in the workstation mode unless otherwise specified.
      • On versions of the framework prior to 4.0, the GC used 2 methods of workstation collection, concurrent and non-concurrent, where concurrent GC collections allowed other managed threads to keep running and executing code during the garbage collection.
      • Starting in 4.0, the concurrent collection method was replaced (upgraded?) by a background collection method. The background collection method has some exciting (good) performance implications, adding support for running ephemeral collections in parallel with a gen-2 collection, something not supported by the concurrent method (article coming soon).
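The mode is chosen through configuration rather than code. For a standalone application, a sketch of the app.config switches (element names as documented for .NET 4.0):

```xml
<configuration>
  <runtime>
    <!-- opt in to server GC; workstation GC is the default -->
    <gcServer enabled="true"/>
    <!-- pre-4.0: concurrent workstation GC; 4.0+: background GC -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```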

Here is a 2007 interview with Patrick Dussud, the architect of the .NET Garbage Collector: Channel 9 Video

Next, I'll cover the .NET object lifecycle and the role of the GC in the life of an object.



Installing and using FxCop

This article supports the article “Definition of DONE” with regard to code analysis and best practices. FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework CLR) and reports on whether the assemblies abide by good design guidelines and best practices. Things like architectural design, localization, performance and security improvements are among the many things the tool will check automatically for you, giving you a nice detailed report about its findings. Many of the issues are violations of the programming and Microsoft guidelines for writing robust and easily maintainable code with the .NET Framework.

The home page of the tool says that "FxCop is intended for class library developers". Wait, what? Class library developers? WTF, whatever... the fact is that the tool is good for any type of managed binary, including service libraries, WinForms and WPF projects. If it looks like a DLL, smells like a DLL, and has the extension "*.dll" or "*.exe" => FxCop will screen the hell out of it.

I also find FxCop a very useful learning tool for developers who are new to the .NET Framework or who are unfamiliar with the .NET Framework Design Guidelines. There is plenty of documentation online (MSDN) about the Design Guidelines and the different rule sets of best practices Microsoft recommends.

Think of FxCop as a bunch of unit tests that examine how well your libraries conform to a set of best practices.

FxCop is one of the tools I use to follow my Scrum definition of "Done" when writing code in .NET. But I also wrote about ReSharper's Code Analysis features, how great they are and how we use them too. So the singular question becomes: WHAT IS THE DIFFERENCE BETWEEN FxCop and ReSharper Code Analysis?

The answer is simple:

  • ReSharper Code Analysis analyzes your .NET language (C#, VB.NET) source code.
  • FxCop analyzes the binaries produced by your source code. FxCop looks at the Common Intermediate Language (CIL) generated by the compiler of your .NET language.

In a sense, FxCop analysis should be done after ReSharper analysis, and will trap everything R# missed. FxCop was initially developed as an internal Microsoft solution for optimizing new software being produced in house, and ultimately it made its way to the public.

Installing FxCop

  1. Verify whether or not you already have the Windows SDK 7.1. If the folder "C:\Program Files\Microsoft SDKs\Windows\v7.1" exists on your FS, you have it and can skip to step 4; otherwise you need to install it.
  2. Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4. You can download the web installer from HERE or search Google for "windows sdk windows 7 web installer .net 4.0". Make sure you are downloading the one for .NET 4.0 and not another version.
  3. Install the SDK with the following settings
  4. Install FxCop 10 by going to "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\FXCop" and running the setup file.

Using FxCop

There are 2 ways of using FxCop: standalone or within Visual Studio.

Using FxCop standalone could not be simpler. It works similarly to VS in the sense that you create projects, where each project is nothing more than a collection of DLLs that are analyzed together (called targets in FxCop... why not, Microsoft).

To use FxCop as a Visual Studio tool:

  • Open VS and go to Tools->External Tools
  • Add a new tool, call it FxCop, and point the command line to "C:\Program Files (x86)\Microsoft Fxcop 10.0\FxCopCmd.exe"

Now, without leaving VS, you can either run it as a command-line tool by going to Tools->FxCop, OR you can configure your projects to enable code analysis when you build. The second way lets you see the errors and warnings in the same build output window where you get compilation errors.
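For reference, a direct command-line run against a single assembly looks roughly like this (a sketch: /file, /out and /summary are FxCopCmd options, and MyLibrary.dll is a placeholder for your own target):

```bat
REM Analyze one assembly and write an XML report (sketch; adjust paths)
"C:\Program Files (x86)\Microsoft Fxcop 10.0\FxCopCmd.exe" ^
  /file:MyLibrary.dll ^
  /out:FxCopReport.xml ^
  /summary
```

The generated FxCopReport.xml can then be opened in the standalone UI or parsed by your build server.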

Wrap Up

FxCop is a very solid tool for enforcing good practices on your projects and assemblies, and it offers a wide array of configurable features beyond the scope of this article. You can even write custom rules for FxCop and enforce them as part of your project's code analysis. Visual Studio integration is also a great way to keep your code in shape as you write it.

For more resources on FxCop, check out the FxCop Official Blog.



Configuring ReSharper Code Analysis

This article supports the article "Definition of DONE" with regard to code cleanup.

Having your code CLEAN is not only elegant, but also a good practice and habit to have, especially if you work in a team environment. Every developer has their own style, likes and dislikes when it comes to code formatting, ordering, grouping regions, etc.; precisely because of this, it is always good to get the team together, crack open some Red Bulls and come up with your code standards and practices (and while you are at it, put them up on the Wiki). You can then use automation tools for your IDE to speed up the process and make sure everybody is generating and organizing their code files using the same rules. In my case, at work, we live in the .NET world, and ReSharper 4 or later in Visual Studio does the trick perfectly for us.

ReSharper has a feature called "Code Cleanup" that accomplishes exactly that: it automates the suggestions from "Code Analysis" and applies them to code files. You can remove code redundancies, reorder type members and remove redundant using directives, among many other things. This is only one of the many useful things ReSharper offers .NET developers, but it is certainly one I use quite often. Let's see how it works.

At the time of this article I'm using VS 2010 and ReSharper 5.0, so all images are representative only of such software and version.

Make it perfect in one machine

The first thing is to configure all the settings in ReSharper on one machine and then share your settings. There is plenty to go through, and each team may have different requirements. For example, by default ReSharper sets the inspection severity of "Redundant 'this.' qualifier" to "Show as warning", and in my team WE DO LIKE to use the 'this.' qualifier regularly, so we changed the R# setting to "Show as hint". Settings of this kind do not impact code execution performance (the compiler simply ignores the qualifier); they are more of a "style" type of inspection you can always tailor to your specific needs.

Once you have your Dev Box all configured, simply export your settings so the rest of the team can use them.

You can download the style we are currently using HERE.

That is the official way of sharing things using ReSharper, BUT... there is a small problem: it only shares the code inspection rules of the ReSharper settings and not other (important) things like the "Code Cleanup" settings. Breathe with ease, ninja, there are solutions.

  1. If you have ReSharper 5.1 => you can use ReSharper Settings Manager. This is an open source tool (hosted on CodePlex) that allows you to share ALL of your R# settings very easily. Officially you need R# 5.1 for this tool; I've tried it with R# 5.0 and have had a success rate of about 0.5 (there is some black magic going on there that makes it work only half the time with R# 5.0).
  2. Go old school and just copy the settings straight from the application data folder. Navigate to %appdata%\JetBrains\ReSharper\v5.0\vs10.0 and copy the files UserSettings.xml and Workspace.xml to your repository (or a shared folder, or email, or...). You can then get those settings from the other dev machines and put them in their respective places (close VS first).
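Option 2 is easy to script. A Windows batch sketch (the destination share is a placeholder; adjust to your environment):

```bat
REM Back up the two R# settings files to a shared location (sketch)
set RS=%APPDATA%\JetBrains\ReSharper\v5.0\vs10.0
copy "%RS%\UserSettings.xml" "\\yourserver\share\resharper\"
copy "%RS%\Workspace.xml" "\\yourserver\share\resharper\"
```

Run the copies in reverse on each dev machine (with VS closed) to restore the settings.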

Now when you open VS on the other dev machines you'll find the same configuration and code cleanup settings as the original box. Sweet!

Using Code Cleanup

When you have a source code file open in VS, the code inspection engine will always tell you how well (clean) your code is doing with regard to your R# settings. The keyboard shortcut to run the code cleanup is Ctrl+E+C or Ctrl+K+C, depending on the shortcut schema layout you are using. After you run it, things will be looking better for you ;)

  • Red light means there is a BREAKING change in the code, and it will probably not compile.
  • Orange light means the code doesn't have any breaking changes, but it can be improved. You'll get some warnings and some suggestions.
  • Green means you are good to go buddy. Ready for commit.

Part of our Definition of DONE is to get a green light on your source files. No exceptions.

You can find some more useful features of ReSharper like solution-wide analysis HERE.



Rx for .NET... make it the standard!

Microsoft DevLabs has been cooking some very, very cool extensions called the Reactive Extensions (Rx for short) that make the asynchronous programming model simpler by using the observer design pattern more effectively in code.

From the beginning, the observer pattern has been supported by the framework using traditional delegates and event handlers. Prior to .NET 4.0, there were no IObserver<T> or IObservable<T> interfaces in the .NET Framework Class Library (FCL), and that was certainly one of its MOST notable missing legs. This was looooong overdue; almost every other mainstream language provides classes to implement the pattern, and now we can include the .NET languages as well.
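To illustrate what having the interfaces in the FCL buys you, here is a toy sketch (my own types, not framework code) of a push-based source implementing the new IObservable<T>:

```csharp
using System;
using System.Collections.Generic;

// A toy observable source of ints (illustrative only)
class TickSource : IObservable<int>
{
    private readonly List<IObserver<int>> observers = new List<IObserver<int>>();

    public IDisposable Subscribe(IObserver<int> observer)
    {
        observers.Add(observer);
        // The caller unsubscribes by disposing the returned token
        return new Unsubscriber(observers, observer);
    }

    public void Publish(int value)
    {
        foreach (var o in observers) o.OnNext(value);
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<int>> observers;
        private readonly IObserver<int> observer;
        public Unsubscriber(List<IObserver<int>> observers, IObserver<int> observer)
        {
            this.observers = observers;
            this.observer = observer;
        }
        public void Dispose() { observers.Remove(observer); }
    }
}
```

Note how unsubscription is just a Dispose() call on the token you got back, instead of a -= you can forget to write.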

But let's jump back to Rx. These guys took la crème de la crème (some duality concepts, object and function composition, the observer pattern and the power of LINQ) and created one set of AWESOME extensions. In their own words:

Rx is a library for composing asynchronous and event-based programs using observable collections.

In the synchronous world, communication between processes has no buffer; processes wait until the data between them has been completely computed and transferred. This means a sluggish DB can slow the entire process down, or any link in the 2-way communication for that matter (network card, OS, firewall, internet connection, web server, DB server, etc.). In the asynchronous programming world, instead of waiting for things to finish, your code takes the point of view of a subscriber: you wait for somebody to notify you that there is something to be taken, and then you act on it. This is very convenient because you never block the calling thread, and your programs (especially those with a UI thread) stay responsive instead of dying while waiting for a response or burning CPU cycles. There are many tools, namespaces and helper classes .NET provides out of the box to support async programming, such as the ThreadPool, BackgroundWorker, Tasks, etc. What is the problem then? What's the reason for Rx's existence?

The basic problem the Rx team is trying to solve is the lack of clarity and simplicity attached to the asynchronous model. Working in an async world today is way too complicated: you have to create a bunch of "support garbage code" (as I call it) just to perform the simple task of waiting for an async call to return so you can do something with the result. Rx also resolves the issues with .NET events: their lack of composability, manual resource maintenance (you have to do -= after you finish), the event args acting as an unsafe container for the real data source triggering the event, and the fact that you cannot pass an event around as an object and manage it elsewhere. Events are not first-class .NET citizens; they do not share the same benefits as regular reference objects. You cannot assign one to a field or put it in an array, yadayada.

A simple example: suppose you have a combobox and want auto-complete behavior in it, but your word dictionary lives on a remote web service, and you want the user to see the first 20 matches in the dictionary, filtering as they type (like any major browser's search box today). Let's see what we have:

On the server

[sourcecode language="csharp"]
[WebMethod(Description = "Returns the top 20 words in the local dictionary that match the search string")]
public List<string> FindTop20Dict(string searchString, string matchPattern) { ... }
[/sourcecode]

Now on the client.

[sourcecode language="csharp"]
/* On the client we have to implement the TextChanged event, plus 2 timers with
   their ticks (to give some room between keypresses, say 0.5 seconds), and then
   we have to make sure we cancel the previous call if a new keypress is made.
   Manually. In a comparison loop using the timers :| You also have to check
   whether the text actually changed, because if you do Shift+Left and type the
   same selected char, the event fires even when the text DID NOT CHANGE...
   ufff, a lot of things to consider, and then we still have to assign the
   result to the combobox items. All this while we had a thread locked waiting
   for the result from the server. You get the idea of how hard and error-prone
   it is. */
[/sourcecode]

I did not write the actual code because it would take quite a chunk of the post, but what I can do is write the code that accomplishes the same thing on the client using the Rx extensions.

[sourcecode language="csharp"]
// Grab input from TextChanged
var ts = Observable.FromEvent<EventArgs>(txt, "TextChanged");
var input = (from e in ts select ((TextBox)e.Sender).Text)
    .DistinctUntilChanged()
    .Throttle(TimeSpan.FromSeconds(0.5));

// Define the async service call as an observable
var svc = new MyWebService();
var match = Observable.FromAsyncPattern<string, string, List<string>>(
    svc.BeginFindTop20Dict, svc.EndFindTop20Dict);
var lookup = new Func<string, IObservable<List<string>>>(word => match(word, "prefix"));

var result = from term in input
             from words in lookup(term)
             select words;

// Subscribe and wait while the magic happens
result.Subscribe(words =>
{
    combo.Items.Clear();
    combo.Items.AddRange(words.ToArray());
});
[/sourcecode]

As you can see, with Rx you can read what is going on in plain English; it is more efficient (since it is optimized and uses best practices), and it makes your code more elegant and legible when coding in the async world.

If you want to find out more about Reactive Extensions (Rx) check out the links:

Rx channel in Channel 9 at:

Rx DevLabs page:

Rx Team Blog:

As of today, 11/17/2010, you can download the Rx extensions from DevLabs and play and create beautiful code with these extensions. Give them a try, and let's just hope they are included in .NET 4.5 as the standard way of working with the asynchronous programming model.



On Hiring: One in the bush... too much to ask?

Today almost any tech enthusiast can say "I know how to program in C#" or "I know how to create awesome websites using RoR". With the evolution and abstraction of programming languages, writing a program on your own has become increasingly easy. My aunt, a 58-year-old catholic woman with 5 children and 8 grandchildren, just made her own website using nothing but Dreamweaver. The point I want to make is that with popularity also comes mediocrity. There are a lot of self-titled software engineers in the job market who cannot tell you what a pointer is, nor explain the basics of recursion. Many of these people make it their goal to memorize a specific IDE, framework or language, and then call themselves software engineers. Hey, when I was a kid, I built a tree house with my friends and... IT NEVER CRACKED... I'm a genius! I'm going to apply for an architect position to build the next Trump Tower!

As I'm writing this post, I'm going through the process at work of interviewing candidates for 2 openings: one as a Junior Software Developer and another as a Software Engineer with substantial experience and hands-on project management. I'm living a nightmare. This is the second time I've had to hunt for people to add to the team, and I'm surprised at how many applicants out there are applying for positions they really cannot fill. After almost 10 days of reading resumes, making phone calls and talking to applicants, I've only been able to handpick 1 out of 57 to move on to the second part of the hiring process. The CEO of the company (my boss) says I'm too rigid with the requirements and too tough on the applicants... I don't know if the expectation is to lower my standards (hard for me) or to keep hunting for the right person, trading the lost time for the quality and productivity the jewel will bring when it appears. In the meantime I can keep looking through a pile of stupid resumes full of lies and incredible fictitious accomplishments. This is the process I'm going through for the hiring.

  • Filter 1: Job applications go first to another guy who filters out the obvious mismatches based on non-technical stuff, such as VISA status, ability to speak English fluently and so on.
  • Filter 2: Then this guy sends the resumes to me, and I take out those that are really, really obvious mismatches based on their previous experience and field of interest. The things you find here are kinda crazy: people who graduated from computer science in 1976 and have spent the last 20 years as a manager's assistant at Walmart. I mean, seriously?
  • Phone Interview: At this point we are down to about 50% of the initial applicants. I order their profiles by the "interesting" factor and start calling them in order. I go through the same script with each one of them, and try to cut them short when I see it's not going to work. Here is my script for the phone interview:


  1. Chat about the company, what we do and what type of job the candidate is going to be doing.
  2. Chat about the candidate. Get him/her to relax and feel comfy talking to me.
  3. Question about most recent project he/she worked on. Specifics.
  4. Three questions about OOP.
  5. Three questions about programming languages (references, heap and object creation and disposal).
  6. Three questions about .NET Framework essentials.
  7. Are you satisfied? (the applicant)
  8. Questions and Answers.


  • In-Person Interview: If the person passes the phone interview with a positive outcome, then he/she is invited to the office to continue the screening. An appointment is set.
  • A Day at the Office: the candidate comes to the office. I make sure to re-ask the questions I thought could have been answered better during the phone interview. I chat with them a little and introduce the candidate to the team members and people they'll be working with. 2 or 3 of the software engineering team members will do a little personality screening in private with the candidate, with the goal of helping me determine whether they think he/she would be a good addition to the team. They can give me a thumbs up or thumbs down and tell me why they think it may or may not be a good fit. I greatly value my team's judgment; ultimately they are going to be working together, and nobody wants a weak link or an introverted soul in the group who doesn't know how to play team sports.
  • The test: After the office showing, the team interview and more questions comes the written test. IT IS INCREDIBLE HOW MANY PEOPLE PASS THE PREVIOUS PHASES JUST TO FAIL HERE. The test is usually something very generic that still requires the minimum knowledge needed for the kind of job we do (and that they are applying for). This time around the exercise is to create a remote, HTTP-accessible application with a function that, given an integer, returns the second consecutive prime greater than that integer. The second part of the exercise is to create a separate client application that uses that method and provides a minimal UI for it. I give them 45 minutes to complete it and another 10 minutes to explain to me what they did. So far not even 1 candidate has completed such a simplistic task in 45 minutes. DAMN IT! I've found myself explaining to recent grads and candidates with Masters degrees what a PRIME number is!!! BOOOOOOOOOO!!! WTF is going on?!?
  • Negotiations and Hiring: So far I've only considered 1 person as a possible candidate to fill the Junior position. Negotiations are straightforward and depend very much on the skills and drive the candidate showed during the interview process. If I'm sure you are the right person for the job, you'll leave the office knowing that, and with a competitive offer in your hand. If you are kinda there but never quite convinced me, BUT made a strong point about your will to learn and your future, I'll put you in the back-up bucket. The rest of the world will receive an email with a sorry and a good luck on their job search.
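For the record, the core of the written-test exercise really is this small. A C# sketch (method names are mine, not the actual handout):

```csharp
// Second consecutive prime strictly greater than n (sketch)
static bool IsPrime(int x)
{
    if (x < 2) return false;
    for (int d = 2; d * d <= x; d++)
        if (x % d == 0) return false;
    return true;
}

static int SecondPrimeAfter(int n)
{
    int found = 0;
    for (int candidate = n + 1; ; candidate++)
        if (IsPrime(candidate) && ++found == 2)
            return candidate; // e.g. SecondPrimeAfter(10) == 13 (primes 11, then 13)
}
```

The HTTP endpoint and the minimal client UI are what the rest of the 45 minutes is for.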

At this point there has to be something I'm doing wrong. I cannot believe there are no worthy software engineers out there looking for a job. Is it that the interview is too tough? Am I expecting too much from the marketplace?



Ahhhh.... RIA Services!!!

OK, so I started down the road of creating a Silverlight web application using the Silverlight Business Application template of Visual Studio 2010. After about 4 hours trying to get things straight with the authentication credentials and the styling of the application, I decided to give up and start with my own WCF-from-scratch entity service layer to serve data to my clients. Cheez, sometimes you really need simplicity. RIA Services provides a lot of nice wrappers with the Domain Services it builds on WCF, but the template itself is not easy to adapt to your own model. Maybe later I'll take a second look at the RIA template with VS 2010, but for now I'm going rogue, old style, project by project.



New product coming...

I'm starting a new project/product now. Let's see how it goes; I have very high expectations for this new product, and I'll let details slip through as it takes shape. I still have to create a new company for it, so I can get some tax benefits too. I keep postponing the company creation thing until tomorrow, and then until tomorrow. One more time: I'll do it tomorrow (and this time I'll really do it). I'll kick things off with .NET 4.0 and some Entity Framework. This is my first time working with the ADO.NET Entity Framework, and I very much like what I've seen so far. Initially I thought about using something different from .NET, like RoR (Ruby) or Django, the Python web framework, but I ended up going back to the land I know best: the .NET world. The reason was simple: the speed at which I can work and produce code matters at this time, since I'm the only one writing code on this thing.

I'll try to keep posting about my progress here and the new challenges I face as I move forward.