
Library Oriented Architecture


Library Oriented Architecture may sound like yet another buzzword in the software arena, but it is one that has not been properly documented as of yet. It is not a common term and certainly far from the widely popular SOA or Service Oriented Architecture. Since there is no formal definition of the term LOA, I’m going to take a stab at it:

“Library Oriented Architecture defines the methodology for creating software components in the form of reusable libraries exclusively constrained to a specific domain ontology.”

What does it mean? Well, I’m not going to drill too deeply into the ontology part; in a nutshell: “don’t confuse a contact with a user, they belong to different domain ontologies” (I wrote a different article about it HERE). In this piece we are going to drill down into the software side, the separation of concerns, and how to define a practical framework to create things in the right places.

I caught the term LOA for the first time at SuperConf 2012 in Miami. Richard Crowley came to the stage and threw the new term at the crowd and got back a few long faces in return. Richard’s own words, when referring to the Library-Oriented approach:

Package logical components of your application independently – literally as separate gems, eggs, RPMs, or whatever- and maintain them as internal open-source projects… This approach combats the tightly-coupled spaghetti so often lurking in big codebases by giving everything the Right Place in which to exist.

His talk was very solid and I recommend that anyone with a hard-core techie heart spare a few minutes on it. You can find his reflections about developing interoperability HERE.

It caught my attention just by the name, because I’ve been saying “it’s like SOA, but with libraries” for some time now; that phrase always came up when I was trying to explain an architectural pattern for building solid systems and frameworks. In general, LOA is just a way of thinking about software engineering. Library Oriented Architecture defines the structuring of libraries for domain ontologies, and it rests on three basic principles:

  1. A software library implementation and subject area expertise must be constrained to only 1 ontology domain.
  2. A software library that needs to use concepts and artifacts from a different ontology domain than the one it belongs to, must interface and reuse the library corresponding to that specific ontology domain.
  3. All domain specific software libraries must be maintained and supported with separate lifecycles.
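
To make the principles concrete, here is a minimal C# sketch. The library and type names (YourCompany.Geo, ContactsApi, GeoApi) are hypothetical, not part of any prescribed LOA naming: a Geo library that needs a contact’s mailing address calls the Contacts library, never the Contacts data store.

[sourcecode language="csharp"]
using System;

namespace YourCompany.Contacts
{
    // Principle 1: everything in this library belongs to the Contacts ontology domain.
    public static class ContactsApi
    {
        public static string GetMailingAddress(int contactId)
        {
            throw new NotImplementedException("TODO");
        }
    }
}

namespace YourCompany.Geo
{
    public class Coordinates
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
    }

    // Principle 1: this library stays inside the GEO ontology domain.
    public static class GeoApi
    {
        public static Coordinates Geocode(string address)
        {
            throw new NotImplementedException("TODO");
        }

        public static Coordinates GeocodeContact(int contactId)
        {
            // Principle 2: the Geo library reuses the Contacts library for anything
            // outside its own ontology domain, instead of touching Contacts data directly.
            string address = Contacts.ContactsApi.GetMailingAddress(contactId);
            return Geocode(address);
        }
    }
}
[/sourcecode]

Principle 3 is organizational rather than structural: each of these libraries would be versioned, released, and supported on its own schedule.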

Before we get into the weeds here, we ought to ask ourselves: Why in the world do we need a new term, or a new architecture, or a new anything in software engineering? Well, we don’t, but if you care to write badass apps and software systems that can evolve gracefully with time, this can turn out to be a very good road to take. For those who enjoy bullet points, here are some of the motivations to explore LOA a bit further:

  1. Simplify configuration management of distributed systems.
  2. Build highly reliable software systems because of the inherent properties of the LOA principles.
  3. Increase the Maintainability Index of your distributed systems and integration repositories.
  4. Minimize the risk of high coupling, especially for large systems (read Writing Elegant Code and the Maintainability Index).
  5. Bring developers up to speed orders of magnitude more quickly than a traditional system. Move developers and teams across libraries and domain ontologies and collaborate seamlessly.
  6. Spot bugs and zero in on the problem almost instantly. There is something to be said about the amount of time a developer spends debugging.
  7. Maximize the Bus Factor of the software engineering team.
  8. Information systems built using LOA are technology-independent; entire libraries and domain implementations can be swapped out with localized impact and minimal upstream ripple effect.

Ok, enough reading, let’s see how this materializes in a diagram.

Library Oriented Architecture

Note that this is a specific implementation of Library Oriented Architecture for compiled libraries. You can adapt this to your own needs for scripted languages and even mix it around however you want. For the sake of simplicity, we’ll stick to this sample for now.

The second thing I want to note here is that the diagram is not describing how to implement LOA. It simply lays the foundations for a software engineering practice that happens to follow LOA principles. I’m sharing this because I think it is useful, and maybe someone will like it enough to offer some suggestions to improve it further.

I want you to notice a couple of things that are illustrated on the diagram:

  1. All 3 principles mentioned above are followed.
  2. The framework favors convention over configuration. Lib names, namespace naming and schema conventions are noted in the last column.
  3. You can clearly dissect the domains vertically and they span all the way from the data storage layer to the actual library implementing the domain specific logic.
  4. A library representing an ontology domain never interfaces with the data-sources, or even data access layer, from any other domain; instead it interfaces directly with the library representing that domain.
  5. Services are merely wrappers of libraries, with minimal or no business logic other than the orchestration of the libraries they need in order to fulfill their function (see the sketch after this list).
    • This is important because services are always tightly coupled to their technology implementations and serialization mechanisms (WCF, ASMX, SOAP, REST, XML, etc.)
    • Part of the service implementation concern is usually dealing with this technology-specific fuzz that is unrelated to the actual business functionality the service is providing.
  6. Exception handling is bubbled up to the lib layer, such that we always get meaningful stack traces when debugging.
  7. Logging, as a cross cutting concern, should be manageable at all levels of the framework, however the domain deems necessary.
  8. If the implementations of the domain-specific libraries share a common framework, such as .NET or Java, they most likely sit on top of a shared library set that extends each framework. For the example illustrated in the diagram, we called them framework infrastructure libraries, or Common Libs for short.
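
To illustrate note 5, here is a minimal sketch of a service that only wraps the domain library. WCF is used as one possible technology; the service and DTO names are hypothetical, and GeoApi is the library sketched earlier.

[sourcecode language="csharp"]
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CoordinatesDto
{
    [DataMember] public double Latitude { get; set; }
    [DataMember] public double Longitude { get; set; }
}

[ServiceContract]
public interface IGeoService
{
    [OperationContract]
    CoordinatesDto Geocode(string address);
}

public class GeoService : IGeoService
{
    public CoordinatesDto Geocode(string address)
    {
        // No business logic here: the service only translates between the wire
        // format (DTO) and the domain types, and delegates to the Geo library.
        var result = YourCompany.Geo.GeoApi.Geocode(address);
        return new CoordinatesDto { Latitude = result.Latitude, Longitude = result.Longitude };
    }
}
[/sourcecode]

Swapping WCF for REST or any other stack only touches this thin layer; the Geo library itself never changes.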

So, now that we have a framework for engineering our software needs, let’s see how to materialize it.

Suppose you are working on the next Foursquare, and it comes to the point where you need services that help you normalize addresses, and work with GIS and coordinates, and a bunch of other geo-location functions that your next-Foursquare needs.

It is hard sometimes to resist the temptation of the ‘just-do-it’ approach, where you ‘just’ create a static class in the same web app, change your Visual Studio web project to make an API call to 3rd party services, and start integrating directly with Google Maps, Bing Maps, etc. Then you ‘just’ add 5 or 6 app settings to your config file for those 3rd party services and boom, you are up and running. This approach is excellent for a POC, but it will not take you very far, and your app will not scale the way it could with a Library Oriented approach.
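
The shortcut usually ends up looking something like this (a hypothetical sketch; the helper class, settings keys, and endpoint are made up for illustration): third-party details and configuration wired straight into the web project.

[sourcecode language="csharp"]
using System;
using System.Configuration;
using System.Net;

public static class GeoHelper
{
    public static string NormalizeAddress(string rawAddress)
    {
        // The web app now owns the API keys, the endpoint URLs, and the wire format
        // of a third-party service: none of it is reusable outside this project.
        string apiKey = ConfigurationManager.AppSettings["GeocodeApiKey"];
        string baseUrl = ConfigurationManager.AppSettings["GeocodeServiceUrl"];
        string url = baseUrl + "?address=" + Uri.EscapeDataString(rawAddress) + "&key=" + apiKey;

        using (var client = new WebClient())
        {
            return client.DownloadString(url); // raw response handed straight back to the caller
        }
    }
}
[/sourcecode]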

Let’s see how we do it in LOA. In this world, it takes you maybe a couple of extra clicks, but once you get the hang of it, you can almost do it with your eyes closed.

  1. The Lib Layer
    1. Create a class library for the GEO domain ontology. Call it something like Geo.dll or YourCompany.Geo.dll. This library becomes part of your lib layer.
      • Deciding the boundaries of domain ontology is not an easy task. I recommend you just wing it at first and you’ll get better with time.
      • You need to read a lot about ontology to get an idea of the existential issues and mind-bending philosophical arguments that come out of it. If you feel so adventurous you can read about ontology HERE and HERE. It will help you understand the philosophical nature of reality and being, but this is certainly not necessary to move on. Common sense will do for now.
      • Just don’t go crazy with academia here and follow common sense. If you do, you may find later that you want to split your domain in two, and that is OK. Embrace the chaos and the entropy that comes out of engineering for scalability, it is part of the game.
    2. Define your APIs as methods of a static class, and add a simple placeholder body:
       [sourcecode language="csharp"]throw new NotImplementedException("TODO");[/sourcecode]
    3. Write your unit tests against your APIs with your assertions (the Test Driven Development practice comes in handy here; see the sketch after this walkthrough).
  2. The DAL Layer
    1. Sometimes your ontology domain does not need to store any data. If that is the case, skip to step 3, else continue reading.
    2. Create a new library for the GEO domain data access layer. Name it according to the convention you previously set up in your company and dev environment. For this example we’ll call it GeoDal.dll.
    3. Using your favorite technique, set up the data access classes, mappings and caching strategy.
      • If your persistent data store and your app require caching, this is the place to put it. I say if, because if you choose something like AWS Dynamo DB where 1 MB reads take between 1 and 10 milliseconds, maybe you want to skip cache altogether for your ‘Barbie Closet’ app :)
      • Memcached, APC, redis, AppFabric, your custom solution, whatever works for you here.
      • You can also use your favorite ORM (NHibernate, Entity Framework, etc.) and they already come with some level of caching on them.
      • Bottom line, LOA does not have any principle preventing you from going wild here, therefore your imagination and experience are the limit.
  3. The Data Layer
    1. For this exercise suppose we need to persist Addresses, Coordinates and Google Maps URLs.
    2. I suggest you scope your data entities by your domain ontology. A way we’ve found to work quite nicely is to use named schemas on RDBMS and set up namespace conventions for your NoSQL databases.
    3. For the GEO domain schema, we used SQL Server and created a named security schema called [Geo]. The use of named schemas makes it easy to avoid long table names, provides nice visual grouping of entities, and allows more granular security for your entities.
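
Putting steps 1.3 and 2 together, here is a minimal sketch under the same hypothetical names used above (GeoApi from the Lib layer, GeoDal.dll for the DAL, MSTest for the unit tests); it is one possible shape, not a prescribed one.

[sourcecode language="csharp"]
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace YourCompany.Geo.Tests
{
    [TestClass]
    public class GeoApiTests
    {
        [TestMethod]
        public void Geocode_KnownAddress_ReturnsCoordinates()
        {
            // Written against the public API before the implementation exists;
            // it fails on the NotImplementedException placeholder until step 1.2 is filled in.
            var result = GeoApi.Geocode("1600 Pennsylvania Ave NW, Washington, DC");

            Assert.IsNotNull(result);
        }
    }
}

namespace YourCompany.GeoDal
{
    public class AddressRepository
    {
        // Only the Geo domain talks to the [Geo] schema; every other domain
        // goes through Geo.dll (see principle 2).
        private const string TableName = "[Geo].[Address]";

        public void Save(string normalizedAddress, double latitude, double longitude)
        {
            // ADO.NET, your favorite ORM, and the caching strategy would live here.
            throw new System.NotImplementedException("TODO");
        }
    }
}
[/sourcecode]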

When it comes to data modeling, another technique I like to use is that of unaltered historical event data. Any ontology domain can be dissected into 3 purpose-specific data models: Configuration Data, Event Data, and Audit Data. They all serve very different purposes, and in general we like to keep them in separate schemas with separate security; this way we’re not commingling concerns. Each concern has a different DAL library, and potentially they all interface with the library representing the domain at the Lib level. This post is already way too long; I’ll try to cover some more data modeling strategies in future posts.
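
As a rough illustration of that split (the entity names and the GeoEvent/GeoAudit schema names are hypothetical, and Entity Framework data annotations are just one possible mapping technique):

[sourcecode language="csharp"]
using System;
using System.ComponentModel.DataAnnotations.Schema;

// Configuration data: the slowly-changing reference entities of the domain.
[Table("Address", Schema = "Geo")]
public class Address
{
    public int AddressId { get; set; }
    public string NormalizedAddress { get; set; }
}

// Event data: unaltered historical records, written once and never updated.
[Table("GeocodeRequested", Schema = "GeoEvent")]
public class GeocodeRequested
{
    public long GeocodeRequestedId { get; set; }
    public int AddressId { get; set; }
    public DateTime OccurredAtUtc { get; set; }
}

// Audit data: who changed what and when, kept under its own security schema.
[Table("AddressChange", Schema = "GeoAudit")]
public class AddressChange
{
    public long AddressChangeId { get; set; }
    public int AddressId { get; set; }
    public string ChangedBy { get; set; }
    public DateTime ChangedAtUtc { get; set; }
}
[/sourcecode]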

Now that we have a clearly separated domain library for our GEO domain, we can wrap it with whatever technology-specific services we need. This is very convenient because when you want to move your SOA stack to a different technology, you don’t have to re-write your entire domain infrastructure, only the service layer. More importantly, it allows for greater scalability, since it degrades gracefully and plays nicely with different frameworks and technologies. A well-implemented Library Oriented Architecture can be said to be technology-agnostic, and that makes it a great SOA enabler.

That’s it for this episode folks. Send me your comments or emails if you are using Library Oriented Architecture, or if you have any suggestions on how to improve the methodology or framework.

Happy coding!


The Myth of the Genius Programmer

http://www.youtube.com/watch?v=0SARbwvhupQ&list=PLCB5CF9838389D7F6&feature=view_all

From Google I/O 2009, here are Brian Fitzpatrick and Ben Collins-Sussman talking about the fears of programmers and the fear of looking 'stupid'.


Must-Have Visual Studio Extensions

I'm just setting up a new dev box now, and there are always some things, like extensions and tools, that I feel are must-haves for Visual Studio 2010 devs. Some of these extensions, like Web Essentials, have already made it to Visual Studio 11 (still in Beta), but I still wanted to share my preferences for my must-have VS extensions:
  1. ReSharper
  2. NuGet Package Manager
  3. Productivity Power Tools
  4. Web Essentials
  5. Image Optimizer
  6. Javascript Parser

There is a cool Channel 9 vid where Mads Kristensen does a walkthrough of some of the goodies in his Web Essentials and the Image Optimizer. If you are in the web dev space, you should check it out: http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Visual-Studio-Toolbox-Web-Essentials-and-CSSCop


Configuring ReSharper Code Analysis

This article supports the article "Definition of DONE" with regard to code cleanup.

Having your code CLEAN is not only elegant, but also a good practice and habit to have, especially if you work in a team environment. Every developer has their own styles, likes and dislikes when it comes to code formatting, ordering, grouping regions, etc.; precisely because of this, it is always good to get the team together, crack open some Red Bulls and come up with your code standards and practices (and while you are at it, put them up on the Wiki). You can then use automation tools for your IDE to speed up the process and make sure everybody is generating and organizing their code files using the same rules. In my case, at work, we are living in the .NET world, and ReSharper 4 or later in Visual Studio does the trick perfectly for us.

ReSharper has a feature called "Code Cleanup" that accomplishes exactly that: it automates the suggestions from the "Code Analysis" and applies them to code files. You can remove code redundancies, reorder type members and remove redundant using directives, among many other things. This is just one of the many useful things ReSharper offers .NET developers, but it is certainly one I use quite often. Let's see how it works.

At the time of this article I'm using VS 2010 and ReSharper 5.0, so all images are representative only of such software and version.

Make it perfect in one machine

The first thing is to configure all the settings in ReSharper on one machine and then share your settings. There is plenty to go through, and each team may have different requirements. For example, by default ReSharper sets the inspection severity for "Redundant 'this.' qualifier" to "Show as warning", and in my team WE DO LIKE to use the 'this.' qualifier regularly, so we changed the R# setting to "Show as hint". Things of this kind do not impact the performance of the executed code, the compiler simply ignores them; they are more of a "style" type of inspection you can always tailor to your specific needs.

Once you have your Dev Box all configured, simply export your settings so the rest of the team can use them.

You can download the style we are currently using HERE.

That is the official way of sharing things using ReSharper, BUT... there is a small problem: that only shares the code inspection rules of the ReSharper settings and not other [important] things like the "Code Cleanup" settings. Breathe with ease ninja, there are solutions.

  1. If you have ReSharper 5.1, you can use ReSharper Settings Manager. This is an open source tool (hosted on CodePlex) that allows you to share ALL of your R# settings very easily. Officially you need R# 5.1 for this tool; I've tried it with R# 5.0 and have had a success rate of 0.5 (there is some black magic going on there that makes it work only half of the time with R# 5.0).
  2. Go old school and just copy the settings straight from the application data folder. Just navigate to %appdata%\JetBrains\ReSharper\v5.0\vs10.0 and copy the files UserSettings.xml and Workspace.xml to your repository (or a shared folder, or email, or... ). You can then grab those settings on the other dev machines and put them in their respective places (close VS first). If you'd rather script the copy, see the sketch after this list.
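
A minimal C# sketch of that manual copy, in case you want to script it; the source path and file names are the ones mentioned above, while the destination path is hypothetical.

[sourcecode language="csharp"]
using System;
using System.IO;

class ExportResharperSettings
{
    static void Main()
    {
        // Source: the ReSharper 5.0 / VS 2010 settings folder mentioned above.
        string source = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            @"JetBrains\ReSharper\v5.0\vs10.0");

        // Destination: wherever your team shares settings (hypothetical path).
        string destination = @"C:\TeamShare\ReSharperSettings";
        Directory.CreateDirectory(destination);

        foreach (var file in new[] { "UserSettings.xml", "Workspace.xml" })
        {
            File.Copy(Path.Combine(source, file), Path.Combine(destination, file), overwrite: true);
        }
    }
}
[/sourcecode]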

Now when you open VS on the other dev machines you'll find the same configuration and code cleanup settings as the original box. Sweet!

Using Code Cleanup

When using VS with a source code file open, the code inspection engine will always notify you of how well (how clean) your code is doing with regard to your R# settings. The keyboard shortcut to run the code cleanup is Ctrl+E+C or Ctrl+K+C, depending on the shortcut schema layout you are using. After you run it, things will be looking better for you ;)

  • Red light means there is a BREAKING change in the code, and it will probably not compile.
  • Orange light means the code doesn't have any breaking changes, but it can be improved. You'll get some warnings and some suggestions.
  • Green means you are good to go buddy. Ready for commit.

Part of our Definition of DONE is to get a green light on your source files. No exceptions.

You can find some more useful features of ReSharper like solution-wide analysis HERE.
