
Polymorphism... back to school

Almost anyone with some notion of programming or object-oriented techniques can describe what polymorphism is and how to use it; I've discovered recently that very few can tell you how it actually works. While this may not be relevant to the daily work of many software developers, the consequences of not knowing, and the misuses that follow, affect everyone. We all know how to describe a car and even how to use it, but only mechanics know how to fix it, because they know how IT WORKS.



Polymorphism: What it is

Polymorphism is a characteristic or feature of programming languages: a language either supports it or it doesn't. Since programming languages fall under the umbrella of sometimes substantially different paradigms, in this article I'm going to concentrate on polymorphism within the scope of Object Oriented Programming.

In OOP, polymorphism is considered one of the basic principles and a very distinctive one. Most object-oriented languages count polymorphism among their many features. In a nutshell, polymorphism is best seen when the caller of a polymorphic object is not aware of the object's exact type. Polymorphism interacts very closely with other features like inheritance and abstraction.


Polymorphism: What it is NOT

Contrary to an overwhelming number of articles found online, things like overloading polymorphism or parametric polymorphism are NOT object-oriented expressions of polymorphism; they apply to functional (declarative) programming languages. Signature-based polymorphism, overloading polymorphism, parasitic polymorphism and polymorphism in closures are meaningful in functional languages only (or at best, in multi-paradigm languages with functional expressions and/or duck-typed languages, like Python or JavaScript).

Polymorphism is not boxing or unboxing, and it is not the same as inheritance; rather, polymorphic behavior usually occurs as a consequence of inheritance, sub-typing and boxing.


Polymorphism: How it works

As its name suggests, polymorphism is the ability of an object to have more than one form or, more precisely, to be treated as if it were of some other type. The most common manifestation can be seen with sub-typing; let's look at it through an example:

[sourcecode language="csharp"]
public class Fruit
{
    public virtual string Description() { return "This is a fruit"; }
}

public class FreshApple : Fruit
{
    public override string Description() { return "Fresh Apple"; }
}

public class RottenBanana : Fruit
{
    public override string Description() { return "Banana after 10 days"; }
}
[/sourcecode]

Now we can do something like this:

[sourcecode language="csharp"]
Fruit[] fruits = { new FreshApple(), new RottenBanana(), new Fruit() };
foreach (var fruit in fruits)
{
    Console.WriteLine("Fruit -> {0}", fruit.Description());
}
Console.ReadLine();
[/sourcecode]

... and that would produce the following output:

Fruit -> Fresh Apple
Fruit -> Banana after 10 days
Fruit -> This is a fruit

That’s all well and good, and almost anyone will get this far. The question is HOW this happens: HOW does the runtime realize that the method it must call is the one in the child classes instead of the one in the parent class? How deep does this rabbit hole go? There is no magic element here, and you’ll see exactly HOW this occurs and WHY it is important to know about it.

These mappings of function calls to their implementations happen through a mechanism called dispatch tables or virtual tables (vTables for short). Virtual tables are a feature of programming languages (again, they either support them or they don’t). Virtual tables are present mostly in languages that support dynamic dispatch (C++, C#, Objective-C, Java), meaning they can bind to function pointers or objects at run time and can abstract away the actual implementation of a method/function/object until the very moment it is going to be used.

The vTable itself is nothing more than a data structure, no different from a Stack or a Hashtable in that respect; it just happens to be the one used for the dynamic dispatch feature in programming languages. Some languages offer a different structure, called binary tree dispatch, as an alternative to vTables. As with any data structures, both have pros and cons when it comes to measuring performance and cost. The point is that whether it is a vTable or binary tree dispatch, dynamic dispatching is bound using a data structure that is carried over with objects and functions to support this feature. Yes, I said “carried over”.

vTables are a hidden variable of objects. In .NET, for example, all classes and structs inherit from Object, and Object already has virtual functions like ToString() and GetHashCode(), so every single object you create will have a hidden private vTable structure that is always initialized behind the scenes in the constructor call of every object. The purpose of these vTables being created all over the place is exactly to map the correct functions for polymorphic objects and functions. You can use Reflector to peek into an object at runtime (IL) and you'll find its vTable in the first memory location of each object's memory segment. The IL instructions call and callvirt (polymorphism, yay!) are used to call non-virtual and virtual methods respectively. There is simply no way of accessing an object's vTable directly from C#; this was done on purpose, among other reasons to minimize its security implications. In C++, hmm...

[sourcecode language="cpp"]
long *vptr = (long *) &obj;    // the vTable pointer is stored at the start of the object
long *vtable = (long *) *vptr; // dereference it to reach the table of function pointers
[/sourcecode]

So the vTable of each object will hold a reference to each one of its virtual functions. Think of it, structurally, as a list of function pointers sorted in the same order they are declared. This serves its purpose in inheritance and polymorphism: when a child object is initialized, regardless of whether it is assigned to a variable of the parent type, the vTable of that object will be the one corresponding to the actual inheritor. When virtual functions are called on the variable, the correct function in the child object (the actual object type) is found thanks to the function pointers in its vTable.
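The vTable mechanism is easy to observe by contrasting an overridden (virtual) method with a hidden (non-virtual) one. A minimal sketch, using hypothetical Parent/Child types:

[sourcecode language="csharp"]
public class Parent
{
    public virtual string Virtual() { return "Parent.Virtual"; }
    public string NonVirtual() { return "Parent.NonVirtual"; }
}

public class Child : Parent
{
    // An override replaces the parent's slot in the child's vTable.
    public override string Virtual() { return "Child.Virtual"; }
    // 'new' hides the method instead; no vTable slot is involved.
    public new string NonVirtual() { return "Child.NonVirtual"; }
}

Parent p = new Child();
Console.WriteLine(p.Virtual());    // "Child.Virtual": resolved through the vTable at run time
Console.WriteLine(p.NonVirtual()); // "Parent.NonVirtual": resolved from the static type at compile time
[/sourcecode]

The virtual call finds the child implementation through the object's vTable, while the non-virtual call is bound to the variable's declared type before the program ever runs.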

Virtual tables, inheritance and polymorphism in general have been criticized for their overhead in program execution. That is why one of the object-oriented programming principles is "Favor Object Composition Over Inheritance". Non-virtual functions never require the overhead of vTables, making them naturally faster than virtual functions. Applications and systems with a deep inheritance architecture tend to spend a considerably large amount of their execution time just resolving the paths of their virtual function calls. This can become quite a performance tax if used without care, especially on older CPU architectures. Most modern compilers have a trick or two up their sleeves to attempt to resolve virtual function calls at compile time without the need of vTables, but dynamic binding will always need these structures to resolve its destination paths.

And now you know how polymorphism works. Happy coding!



Garbage Collection - Pt 2: .NET Object Life-cycle

This article is a continuation of Garbage Collection - Pt 1: Introduction. Like everything else in this world, objects in object-oriented programming have a lifetime, from when they are born (created) to when they die (destroyed or destructed). In the .NET Framework, objects have the following life cycle:

  1. Object creation (new keyword, dynamic instantiation or activation, etc).
  2. The first time around, all static object initializers are called.
  3. The runtime allocates memory for the object in the managed heap.
  4. The object is used by the application. Members (Properties/Methods/Fields) of the object type are called and used to change the object.
  5. If the developer decided to add disposing conditions, then the object is disposed. This happens by coding a using statement or manually calling to the object’s Dispose method for IDisposable objects.
  6. If the object has a finalizer, the GC puts the object in the finalization queue.
  7. If the object was put in the finalization queue, the GC will, at an arbitrary moment in time, call the object’s finalizer.
  8. Object is destroyed by marking its memory section in the heap segment as a Free Object.

The CLR Garbage Collector intervenes in the most critical steps in the object lifecycle; GC is almost completely managing steps 3, 6, 7 and 8 in the life of an object. This is one of the main reasons I believe the GC is one of the most critical parts of the .NET Framework that is often overlooked and not fully understood.

These are the key concepts and agents that participate in the life of a .NET object that should be entirely comprehended:

The Managed Heap and the Stack

The .NET Framework has a managed heap (aka GC heap), which is nothing more than an advanced data structure that the GC uses when allocating reference type objects (mostly). Each process has one heap that is shared between all .NET application domains running within the process.

Similarly, the thread Stack is another advanced data structure, one that tracks the code execution of a thread. There is one stack per thread per process.

They are often compared to a messy pile of laundry (GC heap) and an organized shoe rack (Stack). Ultimately they are both used to store objects with some distinctions. There are 4 things that we store on the Stack and Heap as the code executes:

  1. Value Types: Go on heap or stack, depending on where they were declared.
    • If a Value Type is declared outside of a method, but inside a Reference Type it will be placed within the Reference Type on the Heap.
    • If a Value Type is boxed it’ll be placed in the Heap.
    • If the Value Type is within an iterator block, it’ll be placed in the Heap.
    • Else goes in the Stack.
  2. Reference Types: Always go on the Heap.
  3. Pointers: Go on heap or stack, depending on where they were declared.
  4. Instructions: Always go on the Stack.
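The boxing case from the list above can be seen in a three-line sketch: assigning a value type to an object reference copies the value into a new object on the managed heap:

[sourcecode language="csharp"]
int i = 42;          // value type: a local, stored on the stack
object boxed = i;    // boxing: the value is copied into a new object on the managed heap
int j = (int)boxed;  // unboxing: the value is copied back out of the heap object
[/sourcecode]

Note that `boxed` and `i` are now two independent copies of the value; changing one has no effect on the other.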

Facts about the Heap

  1. The managed heap is a fast data structure that allows for fast access to any object in the heap regardless of when or where it was inserted.
  2. The managed heap is divided into heap segments. Heap segments are physical chunks of memory the GC reserves from the OS on behalf of CLR managed code.
  3. The 2 main segments of the heap are:
    1. The Small Objects Heap (SOH) segment: where small objects (less than 85 KB) are stored. The SOH is also known as the ephemeral segment.
    2. The Large Objects Heap (LOH) segment: where large objects (85 KB or more) are stored.
  4. All reference types are stored on the heap.

Facts about the Stack

  1. The Stack is a block of pre-allocated memory (usually 1MB) that never changes in size.
  2. The Stack is an organized storage structure where grabbing the top item is O(1).
  3. Objects stored in the Stack are considered ROOTS of the GC.
  4. Objects stored in the Stack have a lifetime determined by their scope.
  5. Objects stored in the Stack are NEVER collected by the GC. The storage memory used by the Stack gets deallocated by popping items from the Stack, hence its scoping significance.

Now you start to see how it all comes together. The last magic rule that glues them in harmony is that the objects the garbage collector collects are those that:

  1. Are NOT GC roots.
  2. Are not accessible by references from GC roots.

Object Finalizers

Finalizers are special methods that are automatically called by the GC before the object is collected. They can only be called by the GC provided they exist. The .NET ultimate base class Object has a Finalize method that can be overridden by child objects (anyone basically). The purpose of finalizers is to ensure all unmanaged resources the object may be using are properly cleaned up prior to the end of the object lifetime.

If a type has an implemented (overridden) finalizer at the time of collection, the GC will first put the object in the finalization queue, then call the finalizer and then the object is destroyed.

Finalizers are not directly supported by C# compilers; instead you should use destructors using the ~ character, like so:
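For example, in a hypothetical type wrapping an unmanaged handle, the destructor looks like this:

[sourcecode language="csharp"]
public class NativeWrapper
{
    // C# destructor: the compiler expands this into a protected override of
    // Object.Finalize that calls base.Finalize() for you inside a try/finally.
    ~NativeWrapper()
    {
        // Release unmanaged resources only; other managed objects referenced
        // from here may already have been finalized by the time this runs.
    }
}
[/sourcecode]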

The CLR implicitly translates C# destructors to create Finalize calls.

Facts about Finalizers

  1. Finalizer execution during garbage collection is not guaranteed to happen at any specific time, unless a Close or Dispose method is called.
  2. Finalizers of 2 different objects are not guaranteed to run in any specific order, even if they are part of object composition.
  3. Finalizers (or destructors) should ONLY be implemented when your object type directly handles unmanaged resources. Only unmanaged resources should be freed up inside the finalizer.
  4. Finalizers must ALWAYS call the base.Finalize() method of the parent (this is not true for C# destructors, that do this automatically for you)
  5. C# does not support finalizers directly, but it supports them through C# destructors.
  6. Finalizers run in arbitrary threads selected or created by the GC.
  7. The CLR will only continue to finalize objects in the finalization queue as long as the number of finalizable objects in the queue continues to decrease.
  8. Finalizer calls might not run to completion if one of the finalizers blocks indefinitely (in the code), or if the process in which the app domain is running terminates without giving the CLR a chance to clean up.

More at MSDN.

Disposable Objects

Objects in .NET can implement the IDisposable interface, whose only contract is to implement the Dispose method. Disposable objects are created and disposed explicitly by the developer, and their main goal is to release managed and unmanaged resources on demand (when the developer wants to). The GC never calls the Dispose method on an object automatically. The Dispose() method on a disposable object can only get executed in one of two scenarios:

  1. The developer invokes a direct call to dispose the object.
  2. The developer created the object in the context of the using statement.

For example, given a disposable object type, the following alternatives are equivalent ways of triggering its cleanup:
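A minimal sketch, with a hypothetical Resource type: the using statement is compiler shorthand for the try/finally form.

[sourcecode language="csharp"]
public class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

// Alternative 1: direct call to Dispose, guarded by try/finally.
var r1 = new Resource();
try
{
    // ... use r1 ...
}
finally
{
    r1.Dispose();
}

// Alternative 2: the using statement, which expands to the same try/finally.
using (var r2 = new Resource())
{
    // ... use r2 ...
}
[/sourcecode]

In both cases Dispose runs even if the code using the object throws, which is the whole point of the pattern.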


Facts about Disposable Objects

  1. An object is a disposable object if it implements the IDisposable interface.
  2. The only method of the IDisposable interface is the Dispose() method.
  3. On the Dispose method the developer can free up both managed and unmanaged resources.
  4. Disposable objects can be used by calling the object's Dispose() method directly or by wrapping the object in a using statement.
  5. The Dispose() method on disposable objects will never get called automatically by the GC.

More at MSDN.


Ok, so now that we know the key players, let’s look again at the life of a .NET object. This time around we can understand a bit better what’s going on under the carpet.

  1. Object creation (new keyword, dynamic instantiation or activation, etc).
  2. The first time around, all static object initializers are called.
  3. The runtime allocates memory for the object in the managed heap.
  4. The object is used by the application. Members (Properties/Methods/Fields) of the object type are called and used to change the object.
  5. If the developer decided to add disposing conditions, then the object is disposed. This happens by coding a using statement or manually calling to the object’s Dispose method for IDisposable objects.
  6. If the object has a finalizer, the GC puts the object in the finalization queue.
  7. If the object was put in the finalization queue, the GC will, at an arbitrary moment in time, call the object’s finalizer.
  8. Object is destroyed by marking its memory section in the heap segment as a Free Object.


PS:   A good friend has pointed out in the comments a good article by Eric Lippert titled "The Truth About Value Types". Go check it out!



Garbage Collection - Pt 1: Introduction

Garbage collection is a practice in software engineering and computer science that aims at the automation of memory management in a software program. Its origins date all the way back to 1958 and the Lisp programming language implementation (“Recursive Functions of Symbolic Expressions and their Computation by Machine” by John McCarthy), the first to carry such a mechanism. The basic problem it tries to solve is that of reclaiming unused memory in a computer running a program. Software programs rely on memory as the main storage agent for variables, network connections and all other data needed by the program to run; this memory needs to be claimed by the software application in order to be used. But memory in a computer is not infinite, so we need a way to mark the pieces of memory we are no longer using as “free” again, so other programs can use them afterwards. There are two main ways to do this: manual resource cleanup or automatic resource cleanup. Both have advantages and disadvantages, but this article will focus on the latter, since it is the one represented by garbage collectors.

There are plenty of great articles and papers about garbage collection theory. Starting in this article and with other entries following up, I’ll talk about the garbage collection concepts applied to the .NET Garbage Collector specifically and will cover the following areas:

  1. What is the .NET Garbage Collector? (below in this page)
  2. Object lifecycle and the GC. Finalizers and disposable objects (read Part 2)
  3. Generational Collection (read Part 3)
  4. Garbage collection performance implications (article coming soon)
  5. Best practices (article coming soon)

What is the .NET Garbage Collector?

The .NET Framework Garbage Collector (GC from now on) is one of the least understood areas of the framework, while being one of its most sensitive and important parts. In a nutshell, the GC is an automatic memory management service that takes care of the resource cleanup for all managed objects in the managed heap.

It takes away from the developer the micro-management of resources required in C++, where you needed to manually delete your variables to free up memory. It is important to note that GC is NOT a language feature, but a framework feature. The Garbage Collector is the VIP superstar of the .NET CLR, with influence over all .NET languages.

I’m convinced that those of you who have worked with C or C++ cannot count how many times you forgot to free memory when it was no longer needed, or tried to use memory after you had already freed it. These tedious and repetitive tasks are the cause of the worst bugs any developer can be faced with, since their consequences are typically unpredictable. They are the cause of resource leaks (memory leaks) and object corruption (destabilization), making your application and system perform in erratic ways at random times.

Some garbage collector facts:

  1. It empowers developers to write applications with less worries about having to free memory manually.
  2. The .NET Garbage Collector is a generational collector with 3 generations (article coming soon).
  3. GC reserves memory in segments. The 2 main GC segments are dedicated to maintain 2 heaps:
    • The Small Objects Heap (SOH), where small objects are stored. The first segment of the small object heap is known as the ephemeral segment, and it is where Gen 0 and Gen 1 are maintained.
    • The Large Object Heap (LOH), where large objects are maintained; the LOH is collected together with Gen 2.
  4. When GC is triggered it reclaims objects that are no longer being used, clears their memory, and keeps the memory available for future allocations.
  5. Unmanaged resources are not maintained by the GC, instead they use the traditional Destructors, Dispose and Finalize methods.
  6. The GC also compacts the memory that is in use to reduce the working space needed to maintain the heap.
  7. The GC is triggered when:
    • The system is running low on memory.
    • The managed heap allocation threshold is about to be reached.
    • The programmer directly calls the GC.Collect() method in the code. This is not recommended and should be avoided.
  8. The GC executes 1 thread per logical processor and can operate in two modes: Server and Workstation.
    • Server Garbage Collection: For server applications in need of high throughput and scalability.
      • Server mode can manage multiple heaps and run many GC processes in different threads to maximize the physical hardware capabilities of servers.
      • Server GC also supports collection notifications, which allow a server farm infrastructure where the router can re-direct work to a different server when it’s notified that the GC is about to perform a collection, and then resume on the main server when the GC finishes.
      • The downside of this mode is that all managed code is paused while the GC collection is running.
    • Workstation Garbage Collection: For client applications. This is the default mode of the CLR and the GC always runs in the workstation mode unless otherwise specified.
      • On versions of the framework prior to 4.0, GC used 2 methods of workstation collection: Concurrent and Non-Concurrent where concurrent GC collections allowed other managed threads to keep running and executing code during the garbage collection.
      • Starting in 4.0, the concurrent collection method was replaced (upgraded?) by a background collection method. The background collection method has some exciting performance implications (good ones), adding support for running generations of the GC (including Gen 2) in parallel, something not supported by the concurrent method (article coming soon).

Here is a 2007 interview with Patrick Dussud, the architect of the .NET Garbage Collector: Channel 9 Video

Next, I'll cover the .NET object lifecycle and the role of the GC in the lifecycle of an object.



Writing Elegant Code and the Maintainability Index.

Every time I come across a procedure or code file with 1000 lines of code, I just want to sentence its creator to permanent programming abstinence. To write elegant and maintainable code there are a couple of things you'll want to consider (and learn, if unknown). This article will cover some of them, with a focus on Object Oriented Programming. First we shall find a good motive to do this. Why would we want to write elegant code? What is our motivation for code metrics?

  • Improve software quality: Well-known practices on your code will (with high probability) make software more stable and usable.
  • Software Readability: Most software applications are not the sole creation of an individual. Working with teams is always a challenge; code metrics allow teams to standardize the way they work together and read each other’s code more easily.
  • Code flexibility: Applications with a good balance of cyclomatic complexity and design patterns are more malleable and adaptable to small and big changes in requirements and business rules.
  • Reduce future maintenance needs: Most applications need some sort of review or improvement over their lifetime. Coming back to code written a year ago will be easier if the code has good metrics.

To have a good maintainability index in your code you should have a couple of metrics measured up. Ultimately, the maintainability index is what works for your specific case. This is not a fixed industry set of rules, but rather a combination of them that works for the requirements of your organization, application and team. Let’s take a look at what I PERSONALLY CONSIDER good metrics for the ultimate maintainability index of software applications:

Cyclomatic Complexity (CC)

  • Very popular code metric in software engineering.
  • Measures the structural complexity of a function.
  • It is calculated from the number of decision statements and distinct code paths in the flow of a function.
  • Often correlated to code size.
  • A function with no decisions has a CC = 1, being the best case scenario.
  • A function with CC >= 8 should raise red flags, and you should certainly review that code with a critical eye. Always remember: the closer to 1, the better.
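As a quick illustration, the CC of a method is the number of decision points plus one (hypothetical methods):

[sourcecode language="csharp"]
// CC = 1: a single path through the method, no decisions.
public static int Add(int a, int b) { return a + b; }

// CC = 4: three decision points (the three ifs) plus the default path.
public static string Grade(int score)
{
    if (score >= 90) return "A";
    if (score >= 80) return "B";
    if (score >= 70) return "C";
    return "F";
}
[/sourcecode]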

Lines of Code (LOC)

  • LOC is the oldest and easiest-to-measure metric.
  • Measures the size of a program by counting the lines of code.
  • Some recommended values by entities (for .NET Languages):
    • Code Files: LOC <=600
    • Classes: LOC <=1000 (after excluding auto-generated code and combining all partial classes)
    • Procedures/Methods/Functions: LOC<=100
    • Properties/Attributes: LOC <=30
  • If the numbers in your application are larger than the previous values, you should check your code again. A very high count indicates that a type or a procedure is trying to do too much work and you should try to split it up.
  • The higher the LOC numbers the harder the code will be to maintain.

Depth of Nesting

  • Measures the nesting levels in a procedure.
  • The deeper the nesting, the more complex the procedure is. Deep nesting levels often lead to errors and oversights in the program logic.
  • Avoid having too many nested logic cases; look for alternate solutions to deep if/else, for, foreach and switch statements. They lose logical sense in the context of a method when they run too deep.
  • Reading: Vern A. French on Software Quality
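One common way to cut nesting depth is replacing nested conditions with early-exit guard clauses; a before/after sketch with a hypothetical Order type:

[sourcecode language="csharp"]
public class Order { public bool IsPaid; public bool IsShipped; }

// Nesting depth 3:
public static void Process(Order order)
{
    if (order != null)
    {
        if (order.IsPaid)
        {
            if (!order.IsShipped)
            {
                order.IsShipped = true; // ship it
            }
        }
    }
}

// Same logic, nesting depth 1, using guard clauses:
public static void ProcessFlat(Order order)
{
    if (order == null) return;
    if (!order.IsPaid) return;
    if (order.IsShipped) return;
    order.IsShipped = true; // ship it
}
[/sourcecode]

Both methods behave identically; the flat version simply states each disqualifying condition up front instead of burying the interesting work three levels deep.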

Depth of Inheritance (DIT)

  • Measures the number of class levels between a type and the root of the class hierarchy.
  • Classes that are located too deep in the inheritance tree are relatively complex to develop and maintain. The more of those you have the more complex and hard to maintain your code will be.
  • Avoid too many classes at deeper levels of inheritance. Avoid deep inheritance trees in general.
  • Maintain your DIT <= 4. A value greater than 4 will compromise encapsulation and increase code density.

Coupling and Cohesion Index (Corey Ladas on Coupling)

  • To keep it short, you always should: MINIMIZE COUPLING, MAXIMIZE COHESION
  • Coupling: aka “dependency” is the degree of reliance a functional unit of software has on another of the same category (Types to Types and DLLs to DLLs).
    • The 2 most important types of coupling are Class Coupling (between object types) and Library Coupling (between DLLs).
    • High coupling (BAD) indicates a design that is difficult to reuse and maintain because of its many interdependencies on other modules.
  • Cohesion: Expresses how coherent a functional unit of software is. Cohesion measures the SEMANTIC strength between components within a functional unit. (Classes and Types within a DLL; properties and methods within a Class)
    • If the members of your class (properties and methods) are strongly-related, then the class is said to have HIGH COHESION.
    • Cohesion has a strong influence on coupling. Systems with poor (low) cohesion usually have poor (high) coupling.

Design Patterns

  • Design Patterns play an extremely important role in the architecture of complex software systems. Patterns are proven solutions to problems that repeat over and over again in software engineering. They provide the general arrangement of elements and communication needed to solve such problems without solving the same problem twice.
  • I can only recommend one of the greatest books in this subject called: “Design Patterns: Elements of Reusable Object-Oriented Software” by the Gang of Four. Look it up online, buy it, read it and I’ll guarantee you’ll never look at software architecture and design the same way after.
  • Depending on the needs of the problem, applying proven design patterns like Observer, Command and Singleton patterns will greatly improve the quality, readability, scalability and maintainability of your code.

Triex Index (TI - Performance Execution)

  • Triex Index (TI) measures the algorithmic time complexity (Big O) of object types.
  • TI is more important to be considered for object types that carry a large load of analysis, like collections and workers objects.
  • This metric receives a strong influence from Cohesion and should only be applied to classes and types and not to individual members. The coherence is also very influential on the time complexity coherence between its members.
  • This is a personal metric. I haven't found this type of metrics elsewhere, but I think it's singular and significant to the overall elegance and performance of your code.
  • The Triex Index is calculated by dividing the product of the execution orders (Big O) of all members of a class by n to the power of c-1, where c is the number of members in the class/type.
    • TI > n² - Bad. Needs algorithm revision.
    • n < TI < n² - Regular.
    • TI <= n - Good.
  • If a member of a class is hurting the overall TI of the class, try splitting its logic into one or more methods with less costly execution orders. Be careful not to harm the coupling and cohesion metrics of the type while doing this step.

The same way I have my preferences, I have my disagreements with some colleagues when it comes to other known code metrics. I consider some of these code metrics a waste of time when measuring code elegance and maintainability:

  • Law of Demeter (for Functions and Methods)
    • The Principle of Least Knowledge is a design guideline I’ll always encourage to use. What I consider unnecessary is the “use only one dot” rule enforcement for functions and methods.
    • “a.b.Method()” breaks the law, where “a.Method()” does not. That's just ridiculous.
  • Weighted Methods Per Class (WMC)
    • Counts the total number of methods in a type.
  • Number of Children
    • Counts the number of immediate children in a hierarchy.
  • Constructors defined by class
    • Counts the number of constructors a class has.
  • Kolmogorov complexity
    • Measures the computation resources needed to represent an object
  • Number of interfaces implemented by class
    • Counts the number of interfaces a class implements.

Elegant Code

There are 2 main reasons I do not like to consider these metrics when looking at maintainability and code elegance:

  1. Many of them are obsolete metrics when we look at modern software engineering techniques and languages like C# and LINQ. Things like methods per class or number of children do not apply very well to the core concepts of these modern techniques. Just imagine measuring “Weighted Methods Per Class” in a world full of extension methods that ultimately do not depend on the original creator of the object type.
  2. The second reason is that the complexity of the problem never changes. If we have to solve a problem that is by nature a complex one, it doesn’t matter how many constructors or methods we have, or whether the functions abide by the Law of Demeter. That is irrelevant if the solution does not solve the problem. The complexity of any given problem is a constant; the only way out is to change the perspective on the problem and abstract away as much complexity as possible. When you abstract a complex problem, you end up with a large number of abstractions. Counting them is meaningless, BECAUSE THE PROBLEM IS A COMPLEX ONE and ITS COMPLEXITY WILL NOT CHANGE.

It’ll be extremely hard to come up with good numbers for all of these metrics. Yet, you should know about them and the reason for their existence. Then, when you are having a developer saying “Holy cow, I don’t understand the logic of this method, it has too many if then else, I’m lost.” you’ll say “Aha! that method may have a high CC” and go straight to the problem and solve it. Applying metrics to your source code is not magic, you still own what you write, and ultimately you have to know your business well to have elegant code. But hopefully with the help of these metrics and a couple of tools you'll make your code work like a charm and look neat as a pin.



Rx for .NET... make it the standard!

Microsoft DevLabs has been cooking some very, very cool extensions called the Reactive Extensions (Rx for short) to simplify the asynchronous programming model by using the observer design pattern more effectively in code.

Since the beginning, the observer pattern has been supported by the framework using traditional delegates and event handlers. Prior to CLR 4.0, there were no IObserver or IObservable interfaces in the .NET Framework Class Library (FCL), and that was certainly one of its MOST notable missing legs. This was looooong overdue; almost every other mainstream language provides classes to implement the pattern, and now we can include the .NET languages as well.
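For readers who have never rolled the pattern by hand, here is a minimal sketch of what those interfaces buy you (Python for brevity; the names are illustrative and only mirror the shape of .NET's IObservable/IObserver pair, not the actual interfaces):

```python
# A bare-bones observable: subscribers register a callback and get back
# a "disposable" that unsubscribes them, much like IDisposable in .NET Rx.
class Observable:
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)
        return lambda: self._observers.remove(observer)  # the disposable

    def on_next(self, value):
        # Copy the list so unsubscribing mid-notification is safe.
        for obs in list(self._observers):
            obs(value)

ticker = Observable()
seen = []
unsubscribe = ticker.subscribe(seen.append)
ticker.on_next(1)
ticker.on_next(2)
unsubscribe()
ticker.on_next(3)   # no longer observed
# seen == [1, 2]
```

Rx takes this humble shape and layers composition and LINQ on top of it, which is where it gets interesting.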

But let's jump back into Rx. These guys took la crème de la crème: some duality concepts, object and function composition, the observer pattern and the power of LINQ, and created one set of AWESOME extensions. In their own words:

Rx is a library for composing asynchronous and event-based programs using observable collections.

In the synchronous world, communication between processes has no buffer; each side waits until the data between them has been completely computed and transferred. This means a sluggish DB can slow the entire process down, or any link in the two-way communication for that matter (network card, OS, firewall, internet connection, web server, DB server, etc.). In the asynchronous programming world, instead of waiting for things to finish, your code stands from the point of view of a subscriber: you wait for somebody to notify you that there is something to be taken, and then you act on it. This is very convenient because you never block the calling thread, and your programs (especially those with a UI thread) stay responsive instead of hanging on a response or burning CPU cycles. .NET provides many tools, namespaces and helper classes out of the box to support async programming, such as the ThreadPool, BackgroundWorker, Tasks, etc. What is the problem then? What's the reason for Rx's existence?
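The blocking-versus-subscriber contrast can be sketched in a few lines. This is a Python analogy, not Rx code; the function names are made up for illustration:

```python
# A blocking call parks the caller until the result arrives; the
# callback style registers "notify me" and lets the caller move on.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_query():
    time.sleep(0.2)          # stand-in for a sluggish DB round trip
    return "rows"

results = []
with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_query)
    # Subscriber style: run this when the result is ready, don't wait here.
    future.add_done_callback(lambda f: results.append(f.result()))
    # ... the caller is free to do other work instead of blocking ...
# Leaving the with-block waits for outstanding work, so by now
# results == ["rows"].
```

The pain Rx targets is everything around that callback: composing several of them, cancelling stale ones, and cleaning up, which is where plain callbacks and events fall apart.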

The basic problem the Rx team is trying to solve is the lack of clarity and simplicity attached to the asynchronous model. Working today in an async world is way too complicated: you have to create a bunch of "support garbage code" (as I call it) just to perform the simple task of waiting for an async call to return so you can do something with the result. Rx also resolves the issues with .NET events, such as their lack of composition; resource maintenance (you have to unsubscribe with -= after you finish); the event args acting as an unsafe container for the real data source triggering the event; and the fact that you cannot pass an event around in an object and manage it elsewhere. Events are not first-class .NET citizens; they do not share the same benefits as regular reference objects. You cannot assign one to a field or put it in an array, yada yada.

A simple example: suppose you have a combobox and want auto-complete behaviour in it, but your word dictionary lives on a remote web service, and you want the user to see the first 20 matches in the dictionary, filtering as they type (like any major browser's search box today). Let's see what we have:

On the server

[sourcecode language="csharp"]
[WebMethod(Description = "Returns the top 20 words in the local dictionary that match the search string")]
public List<string> FindTop20Dict(string searchString, string matchPattern) { ... }
[/sourcecode]

Now on the client.

[sourcecode language="csharp"]
/* On the client we have to implement the TextChanged event, plus two timers
   with their Tick handlers (to give some room between keypresses, say 0.5
   seconds), and then we have to make sure we cancel the previous call if a
   new keypress is made. Manually. In a comparison loop using the timers :|
   You also have to check whether the text actually changed, because if you
   do Shift+Left and type the same selected char, the event fires even when
   the text DID NOT CHANGE... ufff, a lot of things to consider here, and
   then we have to assign the result to the combobox items. All this while a
   thread sits locked waiting for the result from the server. You get the
   idea of how hard and error-prone it is. */
[/sourcecode]

I did not write the actual code because it would take up quite a chunk of the post, but what I can do is write the code that accomplishes the same thing on the client using the Rx extensions.

[sourcecode language="csharp"]
// Grab input from TextChanged
var ts = Observable.FromEvent<EventArgs>(txt, "TextChanged");
var input = (from e in ts select ((TextBox)e.Sender).Text)
            .DistinctUntilChanged()
            .Throttle(TimeSpan.FromSeconds(0.5));

// Define the syntax of your observer
var svc = new MyWebService();
var match = Observable.FromAsyncPattern<string, string, List<string>>(
    svc.BeginFindTop20Dict, svc.EndFindTop20Dict);
var lookup = new Func<string, IObservable<List<string>>>(
    word => match(word, "prefix"));
var result = from term in input
             from words in lookup(term)
             select words;

// Subscribe and wait while the magic happens
using (result.Subscribe(words =>
{
    combo.Items.Clear();
    combo.Items.AddRange(words.ToArray());
}))
{
    // keep the subscription alive while the form runs
}
[/sourcecode]

As you can see, with Rx you can read what is going on in plain English; it is more efficient (since it is optimized and built on best practices), and it'll make your code more elegant and legible when coding in the async world.

If you want to find out more about the Reactive Extensions (Rx), check out these links:

Rx channel in Channel 9 at:

Rx DevLabs page:

Rx Team Blog:

As of today, 11/17/2010, you can download the Rx extensions from DevLabs and play and create beautiful code with them. Give them a try, and let's just hope they are included in .NET 4.5 as the standard way of working with the asynchronous programming model.



On Hiring: One in the bush... too much to ask?

Today almost any tech enthusiast can say "I know how to program in C#" or "I know how to create awesome websites using RoR". With the evolution and abstraction of programming languages, it is true that writing a program on your own has become increasingly easy. My aunt, a 58-year-old catholic woman with 5 children and 8 grandchildren, just made her own website using nothing but Dreamweaver. The point I want to make is that with popularity also comes mediocrity. There are a lot of self-titled software engineers in the job market who cannot tell you what a pointer is, nor explain the basics of recursion. Many of these people make it their goal to memorize a specific IDE, framework or language, and then they call themselves software engineers. Hey, when I was a kid I built a tree house with my friends and... IT NEVER CRACKED... I'm a genius! I'm going to apply for an architect position to build the next Trump Tower!

As I'm writing this post, I'm going through the process at work of interviewing candidates for 2 openings: one as a Junior Software Developer and another as a Software Engineer with substantial experience and hands-on project management. I'm living a nightmare. This is the second time I've had to hunt for people to add to the team, and I'm surprised at how many applicants are out there applying for positions they really cannot fill. After almost 10 days of reading resumes, making phone calls and talking to applicants, I've only been able to handpick 1 out of 57 to move on to the second part of the hiring process. The CEO of the company (my boss) says I'm too rigid with the requirements and too tough on the applicants... I don't know if the expectation is to lower my standards (hard for me) or to keep hunting for the right person and trade the lost time for the quality and productivity the jewel will bring when it appears. In the meantime, I can keep looking through a pile of stupid resumes full of lies and incredible fictitious accomplishments. Here is the process I'm going through. As I said, one position is for a junior programmer and the other is for an experienced engineer.

  • Filter 1: Job applications go first to another guy that filters out the obvious mismatches based on non-technical stuff, such as their VISA status, ability to speak English fluently and so on.
  • Filter 2: Then this guy sends the resumes to me, and I take out those that are really, really obvious mismatches based on their previous experience and field of interest. It's kinda crazy what you find here: people who graduated in computer science in 1976 and for the last 20 years have been a manager's assistant at Walmart. I mean, seriously?
  • Phone Interview: At this point we are down to about 50% of the initial applicants. I then order their profiles by the "interesting" factor and start calling them in order. I go through the same script with each one of them, and try to cut them short when I see it's not going to work. Here is my script for the phone interview.


  1. Chat about the company, what we do and what type of job the candidate is going to be doing.
  2. Chat about the candidate. Get him/her to relax and feel comfy talking to me.
  3. Question about most recent project he/she worked on. Specifics.
  4. Three questions about OOP.
  5. Three questions about programming languages (references, heap and object creation and disposal).
  6. Three questions about .NET Framework essentials.
  7. Are you satisfied? (the applicant)
  8. Questions and Answers.


  • In-Person Interview: If the person passes the phone interview with a positive outcome, then he/she is invited to the office to continue the screening. An appointment is set.
  • A Day at the Office: the candidate comes to the office. I make sure to ask him/her the questions I thought could have been answered better in the phone interview. I chat with them a little and introduce the candidate to the team members and the people they'll be working with. Two or three of the software engineers on the team will then do a little personality screening in private with the candidate, with the goal of helping me determine whether they think he/she will be a good addition to the team. They can give me thumbs up or thumbs down and tell me why they think it may or may not be a good fit. I value my team's criteria a lot; ultimately they are going to be working together, and nobody wants a weak link or an introverted soul in the group who doesn't know how to play team sports.
  • The test: After the office tour, the team interview and more questions comes the written test. IT IS INCREDIBLE HOW MANY PEOPLE PASS THE PREVIOUS PHASES JUST TO FAIL HERE. The test is usually something very generic that nonetheless requires the minimum knowledge needed for the kind of job we do (and that they are applying for). This time around, the exercise is to create a remote, HTTP-accessible application with a function that, given an integer, returns the second consecutive prime greater than that integer. The second part of the exercise is to create a separate client application that uses that method and provides a minimal UI for it. I give them 45 minutes to complete it and another 10 minutes to explain to me what they did. So far not even 1 candidate has completed such a simplistic task in 45 minutes. DAMN IT! I've found myself explaining to recent grads and candidates with Masters degrees what a PRIME number is!!! BOOOOOOOOOO!!! WTF is going on?!?
  • Negotiations and Hiring: I've only considered 1 person so far as a possible candidate to fill the Junior position. Negotiations are straightforward and depend very much on what skills and drive the candidate shows during the interview process. If I'm sure you are the right person for the job, you'll leave the office knowing that, and with a competitive offer in your hand. If you are kinda there but never quite convinced me, BUT made a strong point about your will to learn and your future, I'll put you in the back-up bucket. The rest of the world will receive an email with a sorry and a good luck on their job search.
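For the curious, the core of the prime exercise boils down to something like this (a Python sketch for brevity; candidates could of course use any language, and the HTTP/UI plumbing is left out):

```python
def next_prime(n):
    """Smallest prime strictly greater than n (trial division is fine here)."""
    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

def second_prime_after(n):
    """The second consecutive prime greater than n, as the exercise asks."""
    return next_prime(next_prime(n))

# e.g. the primes after 10 are 11, 13, so second_prime_after(10) is 13
```

A dozen lines of logic, which is exactly why failing it after passing every earlier phase is so demoralizing.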

At this point there has to be something I'm doing wrong. I cannot believe there are no worthy software engineers out there looking for a job. Is it that the interview is too tough? Am I expecting too much from the marketplace?