Tuesday, December 15, 2009

Consume a RESTful Service as an Object using WCF.


As an enterprise software engineer, I deal with disparate systems. I deal with several impedance mismatches between these systems, and I have built several integration components to glue them together. Many of these systems have their own domain semantics (they speak different languages), so my daily work is to build abstractions over the semantics of different and disparate systems. I am one of those software engineers who believe in the domain engineering paradigm, separation of concerns (a dog is a dog, not a dogcat), inversion of control, etc. I enjoy spending time on a solution to see how it could be done better. My daily work is so challenging that bringing systems together from multiple domains is my strength, and I have used aspect orientation to my advantage.

Recently, I ran into a problem: we have an application that exposes objects as pure XML, and the application supports SOAP 1.1. The SOAP envelope does not have an action (simply put, in WCF terms, it does not have an OperationContract, so when you consume this service there are no web service operations to call on it). This application only accepts pure XML as a request and returns XML as a response.

This is chaos. I know some hackers would say there is no biggie here: just construct a string as XML, send it over to the REST service, get the response as a string, and use LINQ or manually traverse the string to construct an object from it. Wow! That indeed is a hacky-packy solution (forgive me, hacker brothers, I do not intend to cross the line). We are now in the world of objects; things have changed, and newer platforms have emerged.

The scenario is simple: XML input and XML output using the good old REST service approach. How am I supposed to translate this XML, returned as a string, into an object? How do WCF channel factories come into play with this? Keep focus, and I will explain how I did the magic...

First : Call the service from a web browser, or get the schema of all the objects returned by the service. I chose the first option for my scenario, as there was no schema. So what did I do? I called the web service from a browser, saved the response as anything.xml, then opened the Visual Studio command prompt and ran the XSD.EXE command with the following parameters:

xsd anything.xml /outputdir:mydir



The command above generates the schema from the XML I saved from the browser as anything.xml. Now that I have my .xsd file handy, I can generate a .NET class using the same xsd.exe command, but with different parameters. That's the price to pay.

Now that I have anything.xsd in my output directory, I would like to generate .NET classes using the following command:

xsd /c anything.xsd

This generates C# classes for me. Now I am equipped with the object representations of the XML that the REST service sends to my consuming applications.
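
For illustration, here is a trimmed sketch of the kind of class xsd.exe emits (the real output is longer and decorated with additional generated designer and serialization attributes, and the root element name depends on your XML); I will refer to this type as GeneratedFromXSD in the code later in this post:

//A trimmed sketch of xsd.exe output, not the literal generated file.
[System.Xml.Serialization.XmlRoot( "anything" , IsNullable = false )]
public partial class GeneratedFromXSD
{
    private string messageField;

    public string message
    {
        get { return this.messageField; }
        set { this.messageField = value; }
    }
}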

Second : How do I send a request to, and receive a response from, the REST service?

To do this, you need to understand how to use the low-level WCF communication classes to send SOAP messages back and forth. First, I want a mechanism that will enable me to serialize and de-serialize my SOAP messages. Request messages: the following is an example request:

Sample XML Request

<hellorest>
<message>Get Hello world </message>
</hellorest>

Since we have the request XML above, we will have to hand-code it as a C# class. Using the good old System.Xml.Serialization namespace, we decorate our request class with XML attributes for pure serialization. Here is our sample request object with a serialization method:


[XmlRoot( "hellorest" )]
public class RESTWebRequest
{
    //Matches the <message> element in the sample request above.
    [XmlElement( "message" )]
    public string Message
    {
        get;
        set;
    }

    public XmlElement SoapSerialization( )
    {
        XmlSerializer serializer = new XmlSerializer( typeof( RESTWebRequest ) );
        StringWriter writer = new StringWriter( );

        serializer.Serialize( writer , this );

        //Load the serialized XML into a document so its root element can be handed to WCF.
        XmlDocument xmlDocument = new XmlDocument( );
        xmlDocument.Load( new StringReader( writer.ToString( ) ) );

        return xmlDocument.DocumentElement;
    }
}


Having done that, we are ready to use the lower-level communication classes within WCF to send our request object and receive our response as an object too. To do this, we need to understand the following classes and interfaces from the System.ServiceModel and System.ServiceModel.Channels namespaces:

1. Message : Wraps the request/response body in a SOAP envelope.
2. BasicHttpBinding : Ensures communication over HTTP with SOAP 1.1 envelopes.
3. IRequestChannel : Sends a request message and waits for the correlated reply.
4. IChannelFactory : Builds the channels that requests travel through.

The listed interfaces and classes are used to send and receive, serialize and de-serialize SOAP messages. Now for the long-awaited class: the RESTAgent, which sends and receives messages, is defined below:





internal class RESTAgent : IDisposable
{
    //Requires System.ServiceModel and System.ServiceModel.Channels.
    IChannelFactory<IRequestChannel> channelFactory;

    internal string EndPoint { get; set; }

    internal RESTAgent( string EndPoint )
    {
        BasicHttpBinding basicHttpBinding = new BasicHttpBinding( );
        basicHttpBinding.MaxReceivedMessageSize = 1000; //Raise this if the service returns large payloads.
        this.EndPoint = EndPoint;

        channelFactory = basicHttpBinding.BuildChannelFactory<IRequestChannel>
        (
            new BindingParameterCollection( )
        );

        channelFactory.Open( );
    }

    internal GeneratedFromXSD GetResponse( RESTWebRequest webRequest )
    {
        Message requestMessage = CreateRequestMessage( webRequest );
        IRequestChannel requestChannel = GetRequestChannel( );

        requestChannel.Open( );

        //A generous timeout; tune it to your service.
        Message responseMessage = requestChannel.Request( requestMessage , new TimeSpan( 1 , 20 , 00 ) ); //1 hour 20 minutes

        requestMessage.Close( );

        GeneratedFromXSD anything = DeserializeXMLStream( GetMessageContent( responseMessage ) );

        responseMessage.Close( );
        requestChannel.Close( );

        return anything;
    }

    private GeneratedFromXSD DeserializeXMLStream( string xmlStream )
    {
        XmlSerializer serializer = new XmlSerializer( typeof( GeneratedFromXSD ) );

        return ( GeneratedFromXSD ) serializer.Deserialize( new StringReader( xmlStream ) );
    }

    private string GetMessageContent( Message message )
    {
        //Reads the raw XML out of the SOAP body.
        XmlDictionaryReader xmlReader = message.GetReaderAtBodyContents( );

        return xmlReader.ReadInnerXml( );
    }

    private Message CreateRequestMessage( RESTWebRequest request )
    {
        XmlNodeReader reader = new XmlNodeReader( request.SoapSerialization( ) );
        return Message.CreateMessage( MessageVersion.Soap11 , "" , reader ); //Wrap in a SOAP envelope with no specific action ( "" )
    }

    private IRequestChannel GetRequestChannel( )
    {
        return channelFactory.CreateChannel( new EndpointAddress( EndPoint ) );
    }

    public void Dispose( )
    {
        channelFactory.Close( );
    }
}


You can use the RESTAgent class to send a SOAP message to, and receive one from, our RESTful service like so:



RESTWebRequest request = new RESTWebRequest();
request.Message = "Hello world";

RESTAgent agent = new RESTAgent( "http://localhost/restapi" ); //Parameter is the endpoint address

GeneratedFromXSD anything = agent.GetResponse( request );
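
Since RESTAgent implements IDisposable, a using block (a small sketch, reusing the request above and an assumed endpoint address) guarantees the channel factory is closed even when the call throws:

using ( RESTAgent agent = new RESTAgent( "http://localhost/restapi" ) )
{
    GeneratedFromXSD anything = agent.GetResponse( request );
    //Work with the strongly typed response here.
}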

Tuesday, November 17, 2009

The Story of the Spider Web Software (Part 1)

This article is a long story of software projects, practices and cultures that contributed to the failures of software development. The story of the Spider Web Software is all about how you and I have turned the software engineering institution into a chaotic industry. This is not blame, nor one of those cries, because you and I need to accept the fact that we are not following due diligence.

No longer do people, engineers, end-users and business owners whose daily interactions with software have diminished say a good thing about that new solution. People are losing their confidence; end-users are sticking to their manual processes instead of living with the crappy solutions that we once persuaded them to use (dictatorship: you must use it to be compliant with the world => this is a big lie).

The software industry is experiencing a vote of no confidence from all angles, and why? Because we built what we were asked to build => "The Spider Web Software". If you do not build it their way, you are an enemy of the state with all eyes on you; the only way out is to join the open source community. From a disciplined industry to a free-for-all industry, the software industry will continue to suffer these pathetic problems mutilating the profession. I am ashamed to call myself a software enthusiast; I cry inside for the many failed projects that could have been avoided at the initial stage.

Not every piece of software is a piece of crap, not every pragmatic development is wasted, and not every solution misses the users' needs. At least on the systems development front there have been successful software stories, and many programs have grown from their initial version 0.0.0.1 to 0.0.x.2, at least a quantum leap. Get ready for the story. Close your eyes, open them, close them again... voilà:

In the land of Spidery

Once upon a time (time - time), in the ancient town of Spidery in a faraway land, there lived the very strong people of the land, blessed with shelter, shields (anti-virus), clothing and a massive web which connected and supported the foundation of the land. For 1000 years the people of this land lived and conquered their challenges; all worked in a team-spirit fashion towards the success of their legacies.

Happy these people were, until modernization came into the limelight and stories of emerging cities reached the land of Spidery. The king summoned his cabinet, and they decided to follow the trends rather than take one step at a time; they wanted to dive into modernization straight away.

"It is a good idea," said one cabinet member, "at least in one day we will have trains (Bakerloo, Waterloo), skyscrapers, parks and a standard environment; that would make life easy." Do not get me wrong, the Spidery kingdom is very rich (milk and honey flow on the land). The cabinet and the king all agreed that it was time to change the kingdom in one day. News of the changes reached the entire world. The Spidery Kingdom even had a way of celebrating their successes before the success.

Now, when the integration started, many loopholes were discovered, and instead of going back to the drawing board to re-design the architecture of the new modern Spidery land, they insisted on pressing on. At a point the people of the kingdom saw where modernization was taking them: they realized that everything glued together can easily fall apart if one piece fails. It was too late to change the designs towards separation of concerns; the train line, city centre shops, bars, buses, electricity, houses and airports were all linked together in a very tightly coupled manner, such that if one failed the others would not run. It was a sad moment for the people of Spidery; they preferred their old and conservative lifestyles to leaping into modernization in one day. It was a day they will never forget.



Wednesday, November 11, 2009

Google is Ready to "GO"

There is a new addition to the object-oriented programming landscape, codenamed Go (a new addition to, and descendant of, the C family), by Google. It is said to take advantage of new trends in the software versus hardware community. Go takes cues from dynamic languages like Python; it is said to have efficient garbage collection, runtime speed close to the C language, and full support for multi-core processors from the ground up.

Go is blessed with true closures and reflection (a programmer's delight for extending the framework). The reasoning behind the introduction of Go is that the older languages have failed to keep up with current computing trends like fast compilation, expressiveness (functional style), multi-processors and true closures. I personally love the idea of methods/functions returning multiple values, although I struggle with the return type coming after the method parameters.

Is Go the future that we all want?

The problems Go claims to solve are already taken care of by mature object-oriented programming languages. Go should not be seen as a replacement for C# or Java etc., but as a language introduced to solve specific computing problems. C# is still in its infancy, and we have already experienced more powerful programming concepts and constructs than those Go claims to be a silver bullet for.
Dynamic language integration is already a first-class citizen of today's languages: Java has Rhino (integration with JavaScript), and C# 4.0 is coming with the dynamic keyword to solve COM interop problems and integrate with more dynamic languages. .NET is a platform that cannot just be thrown away, because it already blends with today's problems and is constantly being updated to support the features required to develop today's applications.

What do programmers want?
We want a unified programming platform. We would like to see all object-oriented programming languages integrate very easily, and it is not wise to introduce a new language to address every specific problem. If you are a systems integrator and an advanced developer, you will realize that bringing platforms together is more of a struggle. We would love to see a bridge across platforms; enough of this chaos.

Will Go Make it?
Well, it is still experimental, and we are all allowed to contribute to it because it is completely free. Go still has a lot of hurdles to pass through, like every other new language; it will survive most of them, but only if it is focused in the right direction. I personally do not see it as a replacement for any of the more mature and advanced languages.
Share your views, and go for Go here.

Monday, September 14, 2009

Software Development Can be Green Too. (Part 1)

The new wave in our industry, "GREEN IT", is rapidly becoming the IT buzz of the moment. Almost anything you do in computing nowadays needs to undergo the green scrutiny, in order to cut cost and save our ecosystem. Recently I went on a computer gadget parade at Soho (down town London West End) and noticed a gadget that I loved. I asked for the price, and the store assistant had to check records to confirm that the new price was not different from the one on the sticker. I asked why they could not use an electronic price indicator, and he answered: "We want to be green".

Most people, organizations and professionals read their own meanings into the definition of "GREEN computing"; like any other computing phrase or jargon, it can mean different things in different contexts. We need to exercise caution when we form our own theories around the term, because wanting to be green should not mean disregarding what is efficient. Needing to be green does not mean you would not provide your customers and staff with what makes life easier in the workplace and as a product.

Nowadays organizations large and small are going into the green computing foray to make better use of resources and energy. The green aspect is to ensure that there are adequate resources and that project leads/managers/technical leads/architects/business analysts proactively utilize what is left of them in a cost-effective way. Green must ensure resources are maximized and reused accordingly during project development.

What exactly is "GREEN" Computing or IT

Green IT is a computing term that favors efficient production and use of computer resources without dangerous impacts on our environment. It also means cutting cost on what we can reuse (it favors reusability over recycling or buying new). Most computing resources consume a great deal of energy, and the yearly maintenance costs run into millions; if we try to be green we can save a lot, for both the environment and the budget.

Is That all. No! "GREEN IT" also means

We change our practices and obey our ethical responsibilities to society at large. We need to be green ourselves before encouraging it (for example, I am green about my spending habits).

Cloud Computing will take us to the greener pastures

With the Internet becoming so powerful and broadband becoming cheaper every day, it is now wise to take big advantage of the next revolution of our industry: the cloud. Although I can always argue that we have been heading towards the cloud era ever since US military research gave birth to the early Internet in the 1960s; since then the sky has become the limit, we have experienced the advancing Internet revolution, and within the same era, here comes the cloud.

More development should be targeted at the cloud; there is a green advantage in using software systems over the wire compared to the conventional run-from-local-system approach. This approach reduces the cost of installations and the cost per head of users, and of course the cost of maintaining a server is taken off you and becomes the responsibility of the cloud provider. In short, cloud computing will take us to the greener pastures.

Now we need GREEN Software Development

Computer hardware does not exist alone; it is produced to complement and supplement the software written by us. You may start to think that if hardware is useless without software, then the software itself consumes resources from the hardware, and the more resources a piece of software consumes, the more electricity and network packets the hardware will be forced to use.

To be greener in development, we should try to write code that can be reused across our application domain. We should try to employ domain-driven engineering principles (most organizations write the same code over and over again because they could not define their domain).

Software is an asset: if you define your domain well, everything you need is in place and you have a catalog of domain artifacts, then your organization is practicing green development.

Get experienced developers to influence the inexperienced ones, and raise awareness about the importance of modularizing your components. Use AOP (Aspect Oriented Programming) like you have never done before.

We need to embrace greener coding styles, with more focus on software performance and the exploitation of multi-threaded and parallel computing. We need to retrain and ensure that software development standards are used to build software, not just to deliver crappy software. If we want greener software development approaches, then we have to imbibe the culture of using standards. This will take us back to the days of scarce hardware resources.

Green Coding Standards

1. If you are building a large tree structure of objects which requires recursion or the visitor pattern, cache your output to bypass repeating the same recursion when you need the same result (see the sketch after this list). A slower system wastes resources.

2. Use design patterns judiciously. A pragmatic developer appreciates that common and recurring problems can be solved using a library of knowledge. It is wise for software houses to document that knowledge.

3. Agile developers believe that TDD (Test Driven Development), when practiced to its fullest, represents the full specification of a piece of software. In fact TDD holds that the tests are the documentation, and I am a strong believer in this. If you make use of scenario-based TDD, you will understand that TDD can cover all aspects of your application. If TDD can represent your documentation, why write another bogus document, wasting paper, and re-write it whenever something simple changes in your code?

4. Make use of code generators and ORM facilities; these bring you very close to the market place. They are existing strategies proven to be successful, and they reduce code bloat. A good one is this: Rapid Entity Framework.

5. Use open source: open source will make you greener, there is no hidden cost, and the software is completely free to distribute. Patches for open source projects are often delivered quickly.

6. Ensure your program does not have bottlenecks. Ensure your code does not take long to achieve its purpose.

7. Be performance aware: an application that consumes a lot of resources consumes electricity, and such applications are therefore not green.
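
To make point 1 above concrete, here is a minimal memoization sketch of my own (the Node type and CountLeaves computation are hypothetical, and the cache assumes the tree does not change between calls):

using System.Collections.Generic;

public class Node
{
    public string Id { get; set; }
    public List<Node> Children = new List<Node>();
}

public static class TreeMetrics
{
    //Cache of previously computed results, keyed by node identity.
    static readonly Dictionary<string, int> cache = new Dictionary<string, int>();

    //A hypothetical expensive recursive computation over the tree.
    public static int CountLeaves(Node node)
    {
        int cached;
        if (cache.TryGetValue(node.Id, out cached))
            return cached; //Cache hit: skip the whole recursion.

        int result = node.Children.Count == 0 ? 1 : 0;
        foreach (Node child in node.Children)
            result += CountLeaves(child);

        cache[node.Id] = result;
        return result;
    }
}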

We cannot help it, we only develop what works.

Many developers, especially those who have been around the IT industry for a while, are focused on getting to market faster, and during the process they end up with a product that misses its deadline, a product that is fragile and breaks down every hour, and an un-maintainable code base. I can hear some developers already throwing blame at their managers for not allowing them to follow industry standards before software is delivered. Well, some managers do not understand (make them understand and let them know its importance), and some managers do understand but their superiors do not. These are the problems that IT is facing today.

If we keep focusing on hardware alone in order to be green, then we are only half way to the greener pastures. Software development can be green too.

Thursday, July 30, 2009

Generating PROXY from interfaces (Using Reflect Emit) : The flexible way to reduce code

What is a Proxy?
A proxy is a class object that acts as the original class type. Let's assume we have an interface ICustomer and we need to get implementations of this interface from different data sources (XML, flat file, SOAP object). Before we can assign data to an instance of this interface, we need a concrete type that implements it, i.e. XMLCustomer implements ICustomer. If all we need is to represent what is defined in the ICustomer interface, then we do not need the extra type XMLCustomer; instead we will make a proxy from the interface. I can hear you ask why. Well, read on...

I have been doing a lot of code reduction lately; in fact, in all my years as a software engineer I have been seeking ways to make development a happy hour for myself and my peers. Tell you what: domain-driven development is the future of writing software components that are fail-proof, and it is also the future that will bring us closer to the market place.

The interface is a very powerful programming construct which reduces dependencies between applications and components. With interfaces, your code is not far away from TDD (Test Driven Development), and you can test first before writing any code. So if we want to write code that is targeted at the future, we need to focus on interfaces.

Much blabbing; how do I generate a proxy from an interface?

Here I will be revealing some powers of the Reflection.Emit namespace of the .NET framework. Using the components/classes of this namespace allows you to programmatically examine types and create code at runtime; what we are going to do in this section is runtime code injection and code generation.

Now back to our ICustomer interface, let us assume the class definition is as follows :


public interface ICustomer
{
string FirstName {get; set;}
string LastName {get; set;}
}


Yep, the example above is very simple; we will use it in our proxy generation pattern. Did I just hear you ask where the class that implements this interface is? Nope is the answer; there is none today, because we are trying to avoid more code.

Now The PROXY Maker

We would like to do something like the following :


ICustomer customer = PROXY.CreateProxy<ICustomer>();


Yep, this is cool: we do not have an implementation class, but our proxy maker creates one for us behind the scenes by examining the structure of the ICustomer interface.


public class PROXY
{
    internal const string VIRTUAL_ASSEMBLY_NAME = "our.proxy.us";
    const string version = "1.0.0.0";

    static AssemblyName assemblyName;
    static ModuleBuilder moduleBuilder;

    static AssemblyBuilder assembly;
    static AppDomain curAppDomain;
    static IDictionary<string, Type> cache;

    static PROXY()
    {
        assemblyName = new AssemblyName(VIRTUAL_ASSEMBLY_NAME);
        assemblyName.Version = new Version(version);
        curAppDomain = Thread.GetDomain();
        //An in-memory assembly that will host the generated proxy types.
        assembly = curAppDomain.DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);
        moduleBuilder = assembly.DefineDynamicModule(VIRTUAL_ASSEMBLY_NAME);
        cache = new Dictionary<string, Type>();
    }

    private static ModuleBuilder ModuleBuilder
    {
        get { return moduleBuilder; }
    }

    public static T CreateProxy<T>()
    {
        if (!cache.ContainsKey(typeof(T).Name))
        {
            TypeBuilder proxy = PROXY.ModuleBuilder.DefineType(typeof(T).Name, TypeAttributes.Class | TypeAttributes.Public, typeof(Object), new[] { typeof(T) });
            proxy = Emit<T>(proxy);
            Type type = proxy.CreateType();
            cache.Add(typeof(T).Name, type);

            return (T)Activator.CreateInstance(type);
        }
        return (T)Activator.CreateInstance(cache[typeof(T).Name]);
    }

    private static TypeBuilder Emit<T>(TypeBuilder proxy)
    {
        foreach (PropertyInfo propertyInfo in typeof(T).GetProperties())
        {
            FieldBuilder field = proxy.DefineField(string.Concat("_", propertyInfo.Name), propertyInfo.PropertyType, FieldAttributes.Private);
            //The last argument lists indexer parameters; a simple property has none.
            PropertyBuilder propertyBuilder = proxy.DefineProperty(propertyInfo.Name, PropertyAttributes.HasDefault, propertyInfo.PropertyType, Type.EmptyTypes);

            if (propertyInfo.CanWrite)
            {
                MethodBuilder setMethod = proxy.DefineMethod("set_" + propertyInfo.Name, MethodAttributes.Public | MethodAttributes.Virtual | MethodAttributes.SpecialName | MethodAttributes.HideBySig, CallingConventions.HasThis, null, new[] { propertyInfo.PropertyType });
                GenerateSetMethodBody(field, setMethod.GetILGenerator());
                propertyBuilder.SetSetMethod(setMethod);
            }

            if (propertyInfo.CanRead)
            {
                MethodBuilder getMethod = proxy.DefineMethod("get_" + propertyInfo.Name, MethodAttributes.Public | MethodAttributes.Virtual | MethodAttributes.SpecialName | MethodAttributes.HideBySig, CallingConventions.HasThis, propertyInfo.PropertyType, Type.EmptyTypes);
                GenerateGetMethodBody(field, getMethod.GetILGenerator());
                propertyBuilder.SetGetMethod(getMethod);
            }
        }

        return proxy;
    }

    private static void GenerateSetMethodBody(FieldBuilder field, ILGenerator iLGenerator)
    {
        //Emits: this._field = value;
        iLGenerator.Emit(OpCodes.Ldarg_0);
        iLGenerator.Emit(OpCodes.Ldarg_1);
        iLGenerator.Emit(OpCodes.Stfld, field);
        iLGenerator.Emit(OpCodes.Ret);
    }

    private static void GenerateGetMethodBody(FieldBuilder field, ILGenerator iLGenerator)
    {
        //Emits: return this._field;
        iLGenerator.Emit(OpCodes.Ldarg_0);
        iLGenerator.Emit(OpCodes.Ldfld, field);
        iLGenerator.Emit(OpCodes.Ret);
    }
}


Now the deed has been done. The proxy class above uses Reflection.Emit to generate code at runtime for our interface properties. For each property defined in the interface, a backing field and accessor method bodies are created in the proxy class.
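
A quick usage sketch (the values are illustrative): the generated type round-trips values through the emitted accessors.

ICustomer customer = PROXY.CreateProxy<ICustomer>();

customer.FirstName = "Ada";
customer.LastName = "Lovelace";

Console.WriteLine(customer.FirstName + " " + customer.LastName); //Ada Lovelace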

Sunday, July 26, 2009

Chartered Me (The Hallmark of my commitment to IT)

I received the Chartered IT Professional status award from The British Computer Society. The award is part of the professional achievement that gives satisfaction in your career, and a reassurance that you are doing the right thing and keeping yourself up to date on current trends in your area of expertise. There are many of these awards, but the CITP award stands as the gold standard for a true IT professional.

A professional without a clue of where he or she will belong in years to come is wasting time and waiting to become obsolete, which leads to an inability to keep up with the pace of technological advancement. There are gains in career development (learning new things) and in setting achievable goals yearly. There are gains in standing out from the crowd.

The IT industry has suffered from a lack of professionalism in the past, and this has led to many failed and over-budget IT projects, because of poor project management, outdated software tools and platforms, and, of course, obsolete people working on the project. In the current economic climate, the last thing an organization wants to face is failing and over-budget projects. Some projects fail because most people are obsolete: they have been so blinded by daily routines that they cannot see the advancement in the technology world and how these advancements could help shape their businesses. Most don't even understand their business.

There is now a chance for the like-minded to share their experiences and the pros and cons of using the following paradigms:
  1. project management,
  2. development standards,
  3. risk assessment,
  4. software quality assurance,
  5. software measurement and
  6. agile system.

An organization that is doing the right thing will have to bring the practices listed above together as a single component (I agree there are many more, but for brevity we will mention just a few).

Why is Professional Membership Good?
Like accountants, surveyors and engineers, professional membership and status is an achievement that speaks of your competence and your interest in developing your career. The best place to get the latest information in your profession is through membership of a professional body. There are many professional bodies out there, and The British Computer Society is just one of them, of which I am proud to be a member.

You have the opportunity to meet people of like minds. For example, I am an open source developer and also an advanced software engineer; my professional body will present me with people of like minds with whom I can discuss and get help on emerging trends in technology. This will peer you with people around the world, and you will be the first to know the latest gist in town.

Continuing Professional Development (CPD)
Being a member of a professional body will enable you to learn from experience. You will tailor your career progression to what new thing you have to learn. In this fast-paced, technologically driven world, continuous development should be an attribute that enables you to become a respected and true IT professional. Then you can measure yourself and your skill sets against an SFIA level (the Skills Framework for the Information Age). This will tell you where you are at the moment and the areas you have to work on to get to the next level.

It will enable you and your organization to see beyond your company alone, and keep you at the edge of this competitive market.

Is IT so polluted that we need a professional body?
Well, some sections of IT are so polluted that we need measures in our industry to stop the pollution. Just like a doctor: you cannot be a newly graduated doctor and be given a chance to perform surgery; you need to be validated in some ways (medical school and years of experience) before you are given the chance.

In IT, anybody can write a piece of a program as long as it works, but not everybody will write a program that conforms to best practices. Anybody can manage a project because they feel project management is just overseeing (this is wrong). An IT project manager should have been an enthusiastic software developer; that background enables such a person to strike a balance between delivering quality products and conforming to the standards of our industry.

Wednesday, June 3, 2009

Going to the Clouds with windows AZURE. (A platform for cloud computing)

Access to software, services and applications over the Internet, with or without any idea of their original location: such services are referred to as living in the cloud. Users need not bother about where the software is located.

With cloud computing, you do not have to worry about the technologies that support your virtual environment. You do not have to wait for service providers to set up, install and configure components for your server environment; all the components that you require for the smooth running of your cloud business are already in the cloud.

Cloud computing is a mixture of other existing buzzwords in the Internet jargon, namely:

1. IaaS : Infrastructure as a Service
2. PaaS : Platform as a Service
3. SaaS : Software as a Service

These existing services are incorporated together to form what we refer to as the cloud. They are all functionally dependent on one another to give a breathtaking user experience.

Is cloud computing the new IT Jargon?
In today's complexity-driven IT world, reliance on technology resources is becoming cumbersome for large and small enterprises, and maintaining large infrastructures of computing resources is becoming a cost issue for organisations; now there is a newly born term, cloud computing, to wash away our worries. Doing business on the web is not as easy as it seems when you have to worry about all the infrastructure, security, maintenance and continuous upgrading of that infrastructure; it is nice when providers take charge of the technology backing, and what is left for you is to leverage existing APIs and frameworks to complement your business.

The cloud buzz is already attracting the bigger IT providers like Google, IBM, Sun and Microsoft. What does all this mean for us? The future is in the cloud (with the exception of proprietary-based technologies).

Microsoft answered with Windows AZURE
Windows Azure is Microsoft's first step into the cloud computing foray. Azure is a Microsoft initiative for providing services in the cloud: an operating system used for deploying Azure services in Microsoft's own data centers.

In our day to day deployment/development experience, we overcome several challenges in application configurations, server prerequisites, and configurations.

Windows Azure is an assurance that all these server components (MSMQ, WCF hosts, WF hosts, SOAP, XML, REST, ASP.NET etc.) will be available without you worrying.

The good thing is that Microsoft is providing SDKs for Windows Azure, and there is already Visual Studio integration. Although Microsoft technologies will be supported first (as these are core to the heart of the software giant), Windows Azure will also host non-Microsoft technologies (in fact, there is already an effort on CodePlex for PHP on Azure).


The Azurance Provided by Windows Azure
  1. Provision for Binary Large Object (Blob) tables hosted in the cloud.
  2. Full .NET Framework support.
  3. Available security policies.
  4. Support for other platforms and languages (PHP).
  5. Message queuing.
  6. SDKs available everywhere.
To azure yourself of the next IT jargon using Microsoft platforms and technologies, go here: http://www.microsoft.com/azure/default.mspx. Do not panic: your development will still be the same, and you can continue to use your existing applications. In fact, your applications can be made Azure compliant.

Good things for .NET developers
Microsoft is not forgetting us, and we are already cloud compliant, as Microsoft already provides an SDK that integrates into the Visual Studio development environment. In fact, our existing developments can be converted to Azure services (it is just a matter of configuration). Download the Visual Studio tools for Windows Azure and reassure yourself of the future of computing.

If you stay on this channel, I will be providing a simple .NET application that is Windows Azure compliant.

Hating Try Catch Block (Successful way of cleaning code)

The title of this post is a bit weird: it promotes hatred for try catch blocks in our code. I am not a try catch block hater, but too much of it leads to code rot and very, very buggy software.

Did I just say buggy? Not only does it make code difficult to maintain, it also looks very inexperienced.

In our hard-earned experience developing object-oriented components and systems, we have met several challenges that tell us that OO (Object Orientation) is not complete; it is not a silver bullet and an answer for all. For the OO enthusiasts, permit me to say OO has its own bad practices and is therefore not complete in its own right (no wonder we have several design patterns all over).

Anyway, the purpose of this post is not OO's incompleteness in particular, but practices that can lead to very hard to read code and a difficult, un-maintainable codebase. The following is a short list of code demons:
  • Too many IF ELSE statements : This is a killer code beast; if not properly used, you end up with code that no one will understand.
  • Separation of concerns : The phrase 'separation of concerns' does not apply to components/libraries/.dlls alone. It also applies to classes, methods, properties and interfaces. Any code element is there for a purpose (a concern). If you write a class with several unrelated methods in it (even utility methods like GetDate()), your class does not have a concern, and your library, components and dlls cannot conform to concerns from the class level up. I would rather create a class that does one thing and does it very well than one class that does many things. A dog is a dog and not a dogcat. If your class is a dog, let it be a dog; if it is a cat, let it be a cat.
  • The evil of all TRY CATCH BLOCKS : Yep, too much of this leads to code rot and does not encourage readability of the codebase. Since this is the subject of this article, I will start in a separate paragraph.
The Problem Statement
Consider the following code snippets.


public delegate void SafeMethodCall();

public void DoSomeBadThing()
{
    try
    {
        //Do bad thing
        Thrower();

        try
        {
            //Really do some wacky thing
            Thrower();
        }
        catch (Exception ex)
        {
            //Swallowed. The inner failure vanishes without a trace.
        }
    }
    catch (Exception ex)
    {
        //Swallowed again, and this nesting repeats all over the code base.
    }
}

public void Thrower()
{
    throw new InvalidOperationException("Exception message");
}


Some people will not see anything bad in the above code, but let us assume we are doing the same in several places. This leads to a very messy code base; I would rather allow the application to fail and log the error than catch different exceptions in several places. The way out of this situation is to maintain at most one try catch block at the application entry points. That is, if you have a class that kick-starts your entire application, or an abstraction over a framework, that is the most suitable place to hook the try catch.

An efficient approach :


public delegate void SafeMethodCall();

public void Thrower()
{
    throw new InvalidOperationException("Exception message");
}

public void SomethingNeat()
{
    CallSafely(Thrower);
}

public void CallSafely(SafeMethodCall methodCall)
{
    try
    {
        methodCall.Invoke();
    }
    catch (Exception x)
    {
        //The single place to log the failure or translate the exception.
    }
}


The delegate can be used by several other methods to make safe calls. This means your try catch is maintained in a single location. This approach is also efficient for logging, if you are logging errors to a file or to different repositories.
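
If you are on .NET 3.5 or later, the built-in Action delegates remove the need to declare SafeMethodCall at all, and an overload lets you wrap methods that take a parameter. A minimal sketch of my own (the Log body is a placeholder, and SaveCustomer in the usage line is hypothetical):

public static class Safe
{
    public static void Call(Action methodCall)
    {
        try
        {
            methodCall();
        }
        catch (Exception x)
        {
            Log(x); //Single logging point for the whole code base.
        }
    }

    public static void Call<T>(Action<T> methodCall, T argument)
    {
        try
        {
            methodCall(argument);
        }
        catch (Exception x)
        {
            Log(x);
        }
    }

    static void Log(Exception x)
    {
        //Placeholder: write to your log file or repository of choice.
        Console.Error.WriteLine(x);
    }
}

Parameterless methods are wrapped as before (Safe.Call(GetName)), and a method taking an argument becomes Safe.Call(SaveCustomer, customer).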

Now, how does this approach cleanse our code base?

Let us assume we have three different parameterless methods (because our delegate is parameterless):

public void GetName(); //Can throw a string format exception
public void GetAge(); //Can throw an invalid cast exception
public void GetCountry(); //Can throw a country-not-in-world-map exception

These methods have the potential to throw these errors. How can we call them using the above approach, instead of scattering try catch blocks everywhere in our code base? We now do it like so:

CallSafely(GetName);
CallSafely(GetAge);
CallSafely(GetCountry);

For me, this is cleaner than having many try catch handlers for the exceptional situations.

Let me know what you think.

Friday, April 10, 2009

Building quality and bug free software : Leverage Design By Contract

The advent of object-oriented development gave birth to chains of concepts in recent years, and these have had positive impacts on the way we develop robust, reliable and scalable systems that pass the test of time. One of these concepts is designing software components, and allowing components to interact, based on contracts. These contracts are the full agreements that each component must satisfy before the system can run successfully.

A simple analogy is smashing a bottle on the floor; what do we expect? Of course, we'd expect the bottle to break and scatter all over the floor. The action of smashing a bottle on the floor requires a bottle (pre-condition), and the bottle must scatter on the floor (post-condition). The pre-conditions and post-conditions are the contract agreement in our analogy. Now let's dig into what design by contract really is:

What is Design By Contract (DBC)

Design by contract is the ability of software components to satisfy the obligations that are required for the smooth running of a software system. It was introduced in the Eiffel programming language, designed by Bertrand Meyer, in 1988.


Now, the term design by contract is used to associate software contracts/specifications with the software development itself; by doing so, we describe what a software component expects from a calling component or client. This approach makes the development of software and its specification interwoven, so that each changes when the other does.

Again What is Design By Contract

Design by contract is a software development standard that enforces pre-conditions and post-conditions on the ways software components, methods and libraries communicate with one another. An object-oriented class should depict what it represents. If you have a class called AccountValidator, the job of that class is to validate an account, not to withdraw from that account; another class or service will handle the withdrawal.

To validate an account, we will Require an account details object and the algorithm for validating an account, and we will Ensure that the account is valid and no invalid data was supplied. Note the two words Require and Ensure: these are analogous to the pre-conditions and post-conditions that we have been referring to. So if you are developing the AccountValidator class, its methods should require that parameters meet their demands and ensure that a valid validation response is sent to the consumer.

Why all this, what about TDD (Test Driven Development)

Yep, TDD allows you to pre-verify your code against certain conditions too, and if you have been using Assert statements to verify outputs from your tests, you are near design by contract. But DBC puts the assertions inside your code, not outside it. Your methods should be strict enough and bound only to their part of the contract.

All talk and no code : Tell me more

Now lets consider the following class AccountValidator, and its contract obligations :


interface IAccountValidator
{
    bool ValidateAccount(IAccountDetails account);
}


//IAccountValidator implementation

public class AccountValidator : IAccountValidator
{
    public bool ValidateAccount(IAccountDetails account)
    {
        if (ValidateAccountNumber(account.AccountNumber))
        {
            if (ValidateNameOnAccount(account.NameOfAccount))
            {
                //Now ensure that the value is in the database
                if (ValueIsInDataBase(account))
                    return true;
            }
        }

        return false;
    }

    private bool ValidateAccountNumber(string accountNumber)
    {
        if (string.IsNullOrEmpty(accountNumber))
            throw new InvalidOperationException("Invalid Account Number");

        if (accountNumber.Length != 8)
            throw new InvalidOperationException("Account number must be exactly 8 characters long");

        return true;
    }

    private bool ValidateNameOnAccount(string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new InvalidOperationException("Invalid Account Name");

        return true;
    }

    private bool ValueIsInDataBase(IAccountDetails account)
    {
        //Database calls here; returns whether the account record exists.
        return true; //Placeholder so the sample compiles.
    }
}




The class above specifies two pre-conditions: a valid account number and a valid account name. The post-condition of this class is checking that the actual data submitted is valid in the database. This class is just a simple, classic design by contract example; more advanced frameworks have been produced using aspect weaving or aspect-oriented programming to solve the contract problem.

DBC is coming in .NET 4.0

Code contracts will be part of the Rosario wave (VS 2010). They will give you static compile-time checking of contract violations, API documentation, and improved testability.
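
As a taste of what is coming, here is a minimal sketch of my own (not from the VS 2010 bits) of the AccountValidator pre-conditions expressed with the System.Diagnostics.Contracts API; ContractedAccountValidator is a hypothetical name, and the database check is a placeholder:

using System.Diagnostics.Contracts;

public class ContractedAccountValidator
{
    public bool ValidateAccount(IAccountDetails account)
    {
        //Pre-conditions: what this method Requires from its caller.
        Contract.Requires(account != null);
        Contract.Requires(!string.IsNullOrEmpty(account.AccountNumber));
        Contract.Requires(account.AccountNumber.Length == 8);
        Contract.Requires(!string.IsNullOrEmpty(account.NameOfAccount));

        //Post-conditions would be declared here with Contract.Ensures(...)
        //over Contract.Result<bool>().

        return ValueIsInDataBase(account);
    }

    private bool ValueIsInDataBase(IAccountDetails account)
    {
        return true; //Placeholder for the real database check.
    }
}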

Sunday, March 29, 2009

Democratizing Software Development (A Promise from Rosario) .NET 4.0

Microsoft is making a big change to its .NET suite of technologies. The Visual Studio development environment will gain additional functionality that will make life sweet for all of us. An application server code-named "Dublin" will be introduced to complement IIS (Internet Information Services). WF and WCF will experience another quantum change, and we will all be happy.

The .NET Framework is not left behind, as we will see API changes like: dynamic (support for COM interop and dynamic languages), named parameters, default parameters, BigIntegers, covariance and contravariance for generics (an IEnumerable<string> can now be assigned to an IEnumerable<object>), and also code contracts.


Democratizing Software Development Life Cycle

There is now a big quantum leap from a programming-focused development environment to a software life cycle development environment.

In an Agile-driven development environment, the key practice is collaboration: software developers need to collaborate with architects, testers, project managers and database administrators. This collaboration ensures that products are delivered on time and risks are spotted early. For teams of professionals already collaborating in an agile environment on the .NET platform, the new Visual Studio 2010 will be a plus when it finally ships.

Use case, activity and architectural diagrams are other integrated features of Rosario; developers and architects will enjoy new ways of doing architectural design, because all of this forms part of the new changes in VS 2010.

Ever thought about using test data to validate the efficacy of a software solution? There is an added test tool that will ensure proper scenario-based documentation, which is useful to application testers.

A black-box testing recorder will be integrated into Visual Studio 2010, where testers can interrogate the call stack and record debug sessions for replay later. This lets developers watch the replay from the black box when a bug is found.

Let us sit back and experience the new changes to the developers' number one platform. Derio to Microsoft, Visual Studio Bomayee!!!

Tuesday, March 24, 2009

Time for BDD (Behavioural Driven Development)

A deep thought on why software projects really fail, or are built without actual concentration on the business cases, leads to the fact that the actual requirements were not captured well: no feasibility study of any kind was done to check the efficacy of the business case. We understand that business analysts, quality assurance and, of course, developers need a collaborative effort to build software. Do we as developers really test first before writing the code? The test-first approach aids understanding of business requirements before coding and makes us concentrate on what our requirements really are.

How YAGNI (You Aren't Gonna Need It) can help developers

YAGNI can help us focus on our development needs; it allows us to focus on the business/technical cases and reduces the large, complicated code referred to as code bloat. You really ain't gonna need that sophisticated feature if it is just not helping the software requirements at the moment. I know we developers like to write the best, standardized code that meets the current practices in our industry; that's cool, but are we conscious of the time spent on the sophistication? Do we keep in mind that business requirements come before satisfying our hunger for fine-grained development practices?


BDD complements TDD
The current methodology in software houses is the Agile methodology; whether it is practiced in a full-fledged manner or tailored to your organization's requirements, Agile methodologies favor incremental development over the classic waterfall approach. Agile was once derided as cowboy coding because of its minimal planning approach, but it has proven successful in today's deliverable-focused environments.

Agile methodologies encourage Test Driven Development, which in turn allows us as programmers/developers/architects to focus more on the testability of our code. At some point during the writing of large and complex systems, we lose confidence in our code if we do not have an automated testing framework that restores that confidence and boosts our morale.

That said and done what is BDD?

Have you ever written a complex system with many business scenarios? One reason systems fail is that we do not fully test all the different scenarios the system can find itself in. An agile, test-driven developer should understand that testing scenarios goes beyond your tests and mocks passing: it is about your code/system meeting its full capabilities and conforming to the client's needs.

It promotes a common domain language shared by business analysts, quality assurance and the developers. Developers can spare the technical detail when interacting with non-technical people. This enables everyone to speak the same language, with a common understanding of the business case and of where we are.

A simple non BDD interactions

Business Manager : I want the new software to support staff tracking.

Systems Analyst : That will require us to obtain a GPRS tracking system with an API built around it.

Developers : We need to use a wrapper and/or abstraction that keeps us from being tied to third-party tools. And it seems there is already a Bluetooth API around the corner.

Business Manager : The goal of the system is to let us know the time at which staff log onto the systems, the idle time of the keyboard, and Internet usage.

Systems Analyst : OK, I see. Then provide us with your initial thoughts, and we can write behavioural code that will clarify them.

The interaction above is just an example of why mutual understanding is a necessity in software development. You cannot blame us developers for having a higher-level mindset, it is our job, but we need to understand the business case; in other words, we have to interrogate our code and simulate the business cases to get the clearer picture.

Note that even the business managers do not fully know what they want, but BDD will document and give a clearer picture of what the system should do.

Hence we need a user story

Recently I migrated a very complex set of business scenarios from code into WF (Windows Workflow Foundation). This would not have been possible if I did not know the user story. I have to admit that I dealt with a document which did not clearly structure the requirements in the order of user stories or scenarios; all the same, I picked up my own pen and wrote interesting user stories from that document. Here is a simple example:

As a user of the online banking system
I want to log in with my secure credentials
Then check my account details to see my balance record.

With this scenario => check my account details
Given the login page is active and ready to take my details
And supplying my real encrypted password
And my real user name
When I log in with my details
Then I am redirected to my bank customer account page
And I check my account details.


If systems developers can write stories out of the big, large software requirements documents, then the remaining coding will be less painful, because you will not be coding while still asking about the story; you already know that the story line ends happily ever after.
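
For a taste of how the story above might become executable, here is a minimal sketch of my own written as an NUnit-style C# test (no particular BDD framework; BankingSession and AccountPage are hypothetical types standing in for the system under test):

[TestFixture]
public class CheckAccountDetailsStory
{
    BankingSession session;

    [SetUp]
    public void GivenTheLoginPageIsActive()
    {
        session = new BankingSession(); //Hypothetical system under test.
    }

    [Test]
    public void WhenILogInWithMyCredentials_ThenICanCheckMyAccountDetails()
    {
        //When: supplying my user name and my encrypted password.
        AccountPage page = session.LogIn("ahmed", "encrypted-password");

        //Then: I am redirected to my account page and can see my balance.
        Assert.IsNotNull(page);
        Assert.IsTrue(page.ShowsBalance);
    }
}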

Watch out for my experience with various BDD frameworks like NBehave. See also Dan North's post on what's in a story at http://dannorth.net/whats-in-a-story.

Friday, March 20, 2009

Simple dependency Injector class

It is apparent in today's application development that complexity starts from the moment we type the first line of code. Code complexity is one of the code horrors mutilating today's application development. To work around these development show-stoppers, many design patterns and software development standards have been built to help overcome this poor development approach.

Today we develop software with components in mind; we develop so that we can easily decouple our systems and make them pluggable.

Systems are very difficult to decouple when we do not use a strategic, standardized way of separating component concerns. Over the years, the development and object-oriented community has realized that there is a need to separate application concerns because of changing business cases and requirements.

As a software developer, I have made it a culture to think of software as several functional modules that are independent of one another, yet connected to one another via a pluggable means.

The following code shows an example of code coupling :


public class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
}

Person person = new Person();


There is nothing really wrong with the code above for a simple application. But in a very complex application, we tend to tightly couple the Person class with the calling application. Let us assume the Person class was imported from a web service proxy; that means our application relies on the web service proxy for the Person implementation. What happens when we are no longer interested in the web service proxy, but in another .dll that has its own Person implementation with some extra fields and properties? Do we start refactoring our code to comply with the new Person class? I bet that is not the easiest way.

Let us assume the Person class now implements an interface, IPerson. The following code depicts that kind of definition:


interface IPerson
{
string FirstName { get; set; }
string LastName { get; set; }
}

public class Person : IPerson
{
public string FirstName { get; set; }
public string LastName { get; set; }
}


Hmm, now that Person implements the interface IPerson, we can refer to IPerson all over our code. This is fairly decoupled, until the very point where we actually initialize the Person class, like the following:



IPerson person = new Person();

person.FirstName = "Ahmed";
person.LastName = "Salako";


Now, although we have successfully initialized the Person class, and, because it implements the IPerson interface, assigned it to an IPerson variable, our code is still exposed at the point where we wrote new Person().


A simple Dependency Injector class

The following class will serve as a repository that knows about dependencies of our application :



public static class ModelFactory
{
    //Requires System.Linq. Maps interfaces to their concrete implementations.
    private static IDictionary<Type, Type> repos = new Dictionary<Type, Type>();

    static ModelFactory()
    {
        repos.Add(typeof(IPerson), typeof(Person));
    }

    public static T CreateInstance<T>()
    {
        Type value = repos[typeof(T)];
        return (T)Activator.CreateInstance(value);
    }

    public static T AsInterface<T>(Type type)
    {
        //Verify the concrete type is registered and compatible, then instantiate it.
        Type key = repos.Where(t => t.Value == type).FirstOrDefault().Key;

        if (key == null || !typeof(T).IsAssignableFrom(type))
            throw new InvalidOperationException("Type is not registered against " + typeof(T).Name);

        return (T)Activator.CreateInstance(type);
    }
}


So with the class above, we have successfully created a dependency repository class that knows about an interface and its implementation. Here is how to use the simple dependency repository class:



IPerson person = ModelFactory.CreateInstance<IPerson>();
person.FirstName = "Ahmed";
person.LastName = "Salako";



This leaves us very happy, with nicely decoupled code.
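
The payoff: when the Person implementation moves from the web service proxy to another assembly, only the registration changes; every call site keeps asking for IPerson. A sketch (WebServicePerson is a hypothetical alternative implementation):

static ModelFactory()
{
    //Swap the implementation here; no call site needs to change.
    repos.Add(typeof(IPerson), typeof(WebServicePerson));
}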

Tuesday, January 6, 2009

LINQ And Interpreter Pattern : A tale of datasources

In a service-oriented (SOA) environment, larger systems communicate and handshake with one another using different varieties of data and formats.

These systems leverage different data formats, contracts and pure XML (the RESTful way), hosted in a multi-platform environment.

Recently, in real life, I came across a situation where I needed to fetch information from different data sources. The consuming application(s) do not want to know where these data are imported from, and the producing application does not want to let go of the secret of the source of the data. All it is concerned about is that the format must be consistent, wherever the data may be coming from.

I am supposed to fetch data from the following sources :

1. XML stored in a file system.
2. DataSet from another WebService
3. Service contracts from another service.

At first this seems a mundane task, but you may have to create different code for each data source and a different translation for each, because XML data is not the same as object data; until we have a translator, we will not get anywhere near what we want to achieve.

The first thing that came to my mind was to leverage the LINQ API and architecture to attack the above problem. How do I program my WCF service so that it is extensible to other data sources? How do I ensure that the translation is not ambiguous (hence finding a better approach, and a framework that is consistent enough, to do the job of translating data coming from different sources)?

The Proposed architecture

I dug my hands into the dirty chests of patterns; I traveled from Gang of Four to Microsoft patterns and moved between different communities, but there seemed to be no proper way of ensuring a consistent data translation strategy. After some time of digging, I eventually chose the interpreter pattern (since it is used to interpret sentences in language elements). The reason for this choice is that I need a consistent lexical structure that I can use for different data sources (as explained above), and the interpreter pattern seems the best choice for defining my language grammar.


In this example we want to interpret a customer detail data structure coming from different sources; here is the C# structure of our Customer POCO class.



public class Customer
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address { get; set; }
public int Age { get; set; }
}


We want to be able to translate several data sources to the Customer class defined above. This example will focus on the following data sources :

1. XML
2. Database
3. DataSet
4. object

If we are familiar with the LINQ API, we know that it supports language elements across the data sources named above.

Interpreter Pattern Structure

The base interpreter is the abstraction which we will use to interpret our language elements, because we need not know about the child classes or how they do their interpretation. The code snippet below is our base interpreter.


public abstract class CustomerExpression
{
public Customer Customer { get; set; }
public abstract void Interpret();
}


The code above defines the CustomerExpression class, which dictates the structure of its child classes. The following child classes will be created:

  1. CustomerXMLExpression
  2. CustomerSQLExpression
  3. CustomerDataSetExpression
  4. CustomerObjectExpression
I will be discussing only the first one, the CustomerXMLExpression. The code below is the snippet for the CustomerXMLExpression:



public class CustomerXMLExpression : CustomerExpression
{
    public override void Interpret()
    {
        Customer = TranslateCustomer();
    }

    private Customer TranslateCustomer()
    {
        //Placeholder: the real string would be the XML fetched from the file system.
        StringBuilder customerXML = new StringBuilder("XML Data");
        XDocument customers = XDocument.Parse(customerXML.ToString());

        return (from customer in customers.Descendants("Customers")
                select new Customer
                {
                    FirstName = customer.Element("FirstName").Value,
                    LastName = customer.Element("LastName").Value,
                    Age = int.Parse(customer.Element("Age").Value),
                    Address = customer.Element("Address").Value,
                }).FirstOrDefault();
    }
}
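
Following the same pattern, here is a hedged sketch of the DataSet variant (the Source property and its "Customers" table are assumptions; it uses LINQ to DataSet from System.Data.DataSetExtensions):

public class CustomerDataSetExpression : CustomerExpression
{
    public DataSet Source { get; set; } //Assumed to carry a "Customers" table.

    public override void Interpret()
    {
        //AsEnumerable() turns the rows into a queryable sequence.
        Customer = (from row in Source.Tables["Customers"].AsEnumerable()
                    select new Customer
                    {
                        FirstName = row.Field<string>("FirstName"),
                        LastName = row.Field<string>("LastName"),
                        Age = row.Field<int>("Age"),
                        Address = row.Field<string>("Address")
                    }).FirstOrDefault();
    }
}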

We can use the same idea for the rest of the data sources. Note that Interpret() returns void, so the caller reads the translated result from the Customer property; for example, we can query each data source as follows:



CustomerExpression expression = new CustomerXMLExpression();
expression.Interpret();
Customer customer = expression.Customer;

expression = new CustomerSQLExpression();
expression.Interpret();
customer = expression.Customer;

expression = new CustomerDataSetExpression();
expression.Interpret();
customer = expression.Customer;

expression = new CustomerObjectExpression();
expression.Interpret();
customer = expression.Customer;



You can go a long way with this practice, because it lets you centralise your code and hides a lot of dirty work away from developers.