Thursday, November 27, 2008

Enums can now have methods

Thanks to extension methods.

Have you ever programmed against enums in the .NET Framework? Have you noticed that a .NET enum is not as capable as the enums on other platforms, such as Java? The .NET enum still lags behind more advanced ways of using constants: a .NET enum cannot even have a method (that's how bad it is). In Java, this feature is referred to as constant-specific methods.

But with extension methods in .NET Framework 3.5, we can programmatically extend an enum with methods that look as if they were defined within the enum's own scope.


public enum MyEnum
{
    Name,
    Age,
}

// Extension methods must be declared in a static class.
public static class MyEnumExtensions
{
    public static string RealName(this MyEnum myEnum)
    {
        return Enum.GetName(typeof(MyEnum), myEnum);
    }
}

MyEnum enn = MyEnum.Name;

enn.RealName(); // returns "Name"

Wednesday, November 26, 2008

Pragmatic Approach to Development (Developing for Today and Tomorrow)

Recently I have been engrossed in a project that will contribute to the way we develop systems rapidly. The project is an ORM framework that bridges the impedance mismatch between the relational world and the object-oriented world. I must confess it is one of those projects that does not have a finish date; it is a project that will continuously absorb the current trends in our no-destination journey of system development. By no destination I mean that software evolves over time and there is no promised land: either we re-invent the wheel and make it a better wheel, or we introduce new practices or new complexity.

I started this project out of frustration, because designing a data access framework per project is not good development practice: it leads to inconsistent development methodologies and to frameworks that are prone to errors. In my software development lifetime I have worked on hundreds of real-world projects, and I understand well what fits where. Don't get me wrong, I make mistakes too, but I acknowledge those mistakes immediately and turn them into strengths. So, over the years, I have struggled to find a long-lasting solution to poor and repetitive programming approaches.

Develop For Tomorrow

One of the greatest software practices I have learnt is not letting the requirements control the software, but letting the software development process think ahead of what the requirements may be in the near future. This is a proactive approach to developing scalable and reliable applications that cut across different environments and requirements.

Most of us developers allow software requirements to drive and dictate the architecture of our software, so when requirements change, we need to change heavy chunks of our code. To me that is a very bad approach, and it can be very costly for the organisations practicing it. Where is the separation of concerns? Where is pragmatic development?


Requirement Volatility does not mean software volatility

Feasibility checks should be done on requirements before changes are made. Even without full requirements, software should be built with adaptability and reconfigurability in mind; this ensures that the volatile nature of requirements does not destabilise system development.


Separate as many concerns as you like

Recently, on the Microsoft (MSDN) forum, somebody asked a question about separating a UI framework or layer from a data layer. Many suggestions were given: some said use a DTO (Data Transfer Object), some said use an interface. The bottom line is to use something that won't have an adverse effect on your codebase even when the underlying structure changes. Be very pragmatic and scrutinise the kinds of changes that are anti-patterns. We are not in the procedural programming era; we are developing in a world of objects (think in objects).

Beware of hard-coding
One of the major setbacks to good programming practice is hard-coding. When developers hard-code, it shows how vulnerable our code can be, and how difficult it is to maintain and sustain. I wonder why we would abandon a pragmatic approach to development (even if it takes time, let's do it right). So never hard-code; instead, create a constants file, use XML configuration, or use an enum.
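As a minimal sketch of the advice above (the names here are hypothetical, not from any real project), magic strings and numbers scattered through the code can be replaced with a single constants class or an enum:

```csharp
using System;

// Hypothetical constants class: one place to change a value
// instead of many hard-coded copies scattered through the code.
public static class ConnectionSettings
{
    public const string DatabaseName = "NorthwindDb";
    public const int CommandTimeoutSeconds = 30;
}

// An enum works just as well for a closed set of options.
public enum DeploymentTarget { Development, Staging, Production }

public class Program
{
    public static void Main()
    {
        // Callers reference the named constant, never the literal.
        Console.WriteLine(ConnectionSettings.DatabaseName);
        Console.WriteLine(DeploymentTarget.Production);
    }
}
```

When the database name changes, only the constants class is touched; every caller picks up the new value.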

Tuesday, October 21, 2008

Rapid Entity Framework: A new persistent framework

The object-oriented paradigm is shifting massively, because coupling relational semantics into our fine-grained object-oriented designs is becoming unpopular for today's enterprise needs and domain engineering. For these reasons and many more, object-oriented platforms are already taking a big quantum leap, since the impedance mismatch between OOP and the relational model has made programming against relational databases a Herculean task for most development efforts.

The industry is filled with many frameworks that contribute to this change, and "Rapid Entity Framework" is one of many that aim to solve most of the mismatches we face in the persistence industry today. Rapid Entity Framework is an open source framework born out of a motivation to really solve the divide between relational databases and object-oriented concepts.

With Rapid Entity Framework, you can think about your business domain rather than columns and rows. It is wise design to segregate applications according to their duties, "what they do"; the segregation methodologies applied by the industry today should also be applied to databases and objects (because they are different concepts). A database is data-centric and should contain only data and data-manipulation semantics, while an object-oriented architecture sees objects as real-life artifacts.

Visit Rapid Entity Here

Friday, October 10, 2008

How to do the laziest loading in C# and SQL Server (Part 1)

Mapping POCOs (Plain Old CLR Objects) to a relational database can be a bit messy and can degrade application performance when loading relations. There are several types of relationships in a database that we try to simulate in an object-oriented language, as follows:

  1. One-to-One (one row to one row)
  2. One-to-Many (one row to many rows)
  3. Many-to-One (many rows to one row; the inverse of One-to-Many)
  4. Many-to-Many (many rows to many rows)
When we model the above in OOP, we come up with aggregation, inheritance, composition, containment, etc. But the way relational data is structured and stored differs from the way data is stored in the OOP world: data is stored on disk in the relational world, while data lives on the stack and heap in the CLR (Common Language Runtime).

So, to maximize the usage of data when we are mapping relational to object oriented, we need to use a pattern called "Lazy Loading".

What is Lazy Loading?
Simply put, lazy loading is an approach of loading objects or data only when we need them. This saves CPU cycles because we do not load what we do not need; we free our heap from unnecessary usage, and the almighty deallocator, the garbage collector, can go to sleep. A simple lazy loading example looks like this:


private IList<Customer> customers;

public IList<Customer> Customers
{
    get
    {
        if (null == customers)
            customers = LoadCustomers();

        return customers;
    }
}


In the code snippet above we check for null: if the list of customers has already been loaded, we return it; otherwise we load it. This is classic lazy loading. Note, however, that this kind of lazy loading still loads all customers the first time the property is touched; in classic lazy loading we load everything and return everything. (That's bad, isn't it?)

The Concept of Laziest Loading

Let us assume the following relationship between two objects: a Customer has many Orders. This is a One-to-Many scenario, one customer to many orders, as the following class diagram and code depict:

The relationship in the diagram says that one instance of the Customer class will have multiple instances of Order in its relationship bag, and one instance of an Order will have one instance of a Customer. That's it.

The Perfect Solution

In the classic lazy loading scenario, we load everything as soon as it is required; even if we do not intend to use everything in the list, we end up fetching objects that will all be allocated space on the heap. Unwanted objects make their way onto the CLR heap, and that's not what we bargained for.

What if we create our own special list and a class that implements IEnumerator, as a special case of doing the laziest loading? The following diagram depicts this new concept of laziest loading, using a custom-built list (implementing IList) and a custom iterator (implementing IEnumerator). The diagram represents the laziest loading and the relational-to-object transformation, using a data reader and our custom enumerator class.
Looking at the diagram, you will see three sections: the custom enumeration section, the data reader, and the relationship section. These sections map to each other to achieve the laziest loading I have been talking about.

Datareader mapping with Custom Enumerator
When you create a custom enumerator, you implement the IEnumerator interface, which gives you various methods and properties to implement. In this article, I will talk about the three members that are most useful in our lazy loading scenario. The idea is to lazily load objects from the database as they are iterated, instead of when the whole list is materialised. For that to happen, we need an open data reader that we read when the GetEnumerator method of the list that uses this enumerator is called.

Enumerator MoveNext Mapped to DataReader Read

A call to our list (you will probably create a custom list that implements IList and returns our custom enumerator) starts the iteration. In a foreach loop, the iterator knows whether it has more data to iterate through the call to the MoveNext method. Since we are fetching from the database, we call the Read method of our DataReader here; this moves the reader's cursor to the next row, and returns false when there are no more rows to read.


public bool MoveNext()
{
    return dataReader.Read();
}


Current and DataReader Get[Type]
The enumerator's Current property is used to translate the DataReader's Get[DataType] columns into our domain or entity object. For example, look at the code snippet below:


public object Current
{
    get
    {
        ProductOrder order = new ProductOrder();
        order.OrderDate = Convert.ToDateTime(dataReader[1]);
        order.OrderName = Convert.ToString(dataReader[2]);

        return order;
    }
}


Enumerator Dispose Method (If you implement IDisposable)

The Dispose method is fired when you exit the foreach loop, whether you read the list to the end or break out of it. It is a good idea to close the data reader in Dispose, and before closing it, to keep reading the data reader until it reaches its last row (in case the loop was exited with a break statement). The connection should not be closed here, because you may be using it for other database operations (especially if you are using MARS (Multiple Active Result Sets), a new feature in SQL Server 2005 "Yukon"). The following code snippet depicts the kind of things you can do in the Dispose method:


public void Dispose()
{
    while (dataReader.Read())
    {
        // Drain any remaining rows. Dispose is called when you exit the loop,
        // either at the end of the loop or via a break statement.
    }

    dataReader.Close();
}
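Putting the three members together, the sketch below wires MoveNext, Current and Dispose into one enumerator. Since a live SqlDataReader needs a database, a simple in-memory row source stands in for it here; the RowReader, ProductOrder and LazyOrderEnumerator names are illustrative, not from the original post.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Illustrative domain object from the article.
public class ProductOrder
{
    public string OrderName;
}

// Stand-in for an open SqlDataReader: Read() advances, an indexer reads columns.
public class RowReader
{
    private readonly List<object[]> rows;
    private int position = -1;

    public RowReader(List<object[]> rows) { this.rows = rows; }

    public bool Read() { return ++position < rows.Count; }

    public object this[int column] { get { return rows[position][column]; } }

    public void Close() { position = rows.Count; }
}

// The laziest-loading enumerator: one row becomes one object per iteration step.
public class LazyOrderEnumerator : IEnumerator, IDisposable
{
    private readonly RowReader reader;

    public LazyOrderEnumerator(RowReader reader) { this.reader = reader; }

    public bool MoveNext() { return reader.Read(); }

    public object Current
    {
        get
        {
            ProductOrder order = new ProductOrder();
            order.OrderName = Convert.ToString(reader[0]);
            return order;
        }
    }

    public void Reset() { throw new NotSupportedException(); }

    public void Dispose()
    {
        while (reader.Read()) { }  // drain remaining rows, as in the article
        reader.Close();
    }
}

public class Program
{
    public static void Main()
    {
        var reader = new RowReader(new List<object[]> {
            new object[] { "Books" }, new object[] { "Pens" } });

        var enumerator = new LazyOrderEnumerator(reader);
        while (enumerator.MoveNext())
            Console.WriteLine(((ProductOrder)enumerator.Current).OrderName);
        enumerator.Dispose();
    }
}
```

Each object is materialised only when MoveNext/Current is called for its row, which is the whole point of the laziest loading.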


This is just the beginning, so watch out for part two, where we create lazy loading handler delegates and where I introduce you to ghosting, or dynamic proxies, in C# using IL (Intermediate Language). Thanks.

You can now find the Dynamic Proxy Part 1 here

Monday, October 6, 2008

Validating Domain Objects

Introducing Domain Engineering:
All OOP languages allow users to create reusable components that cut across different domains in any business context. These components become a reusable asset for the practicing organization. This is called cross-cutting concerns, and also refers to the art of grouping objects and data into families of systems (domain engineering). The system families are distributed across application domains (with common variabilities and commonalities) within the same organization and/or the entire world. The paradigm shift from application engineering to domain engineering gives us the gains of building reusable families of systems.

What are families of systems?
A system family is a library of components that are easily configured with other systems. In .NET a system family could be a set of assemblies; in Java it is the JAR files that contain the domain artifacts. When building domain objects, we need to ensure that they represent what they are, so as not to over-engineer a system that will later backfire on our investments.

Let us assume the following scenario of a banking system: we have Customer, Account, Statements, etc.

If we need to implement the above banking domain in C#, we will model each concept as a C# POCO (Plain Old CLR Object). The following diagram shows what we would implement as our domain artifacts:

You can see the relationships that these objects hold to one another. A Customer object has many Accounts; a Customer object also has many Statements per Account. This is a simple design of a banking domain system. If we create these classes and put them in a single assembly (a JAR file in Java), they become reusable across application domains and interfaces.


Mapping domain objects to relational database

The advent of object-relational mapping has finally come: the bigger players in the OOP language community already have frameworks interwoven with their runtimes. The two main OOP players today are the .NET Framework and the Java runtime. The .NET Framework has introduced the Entity Framework, while the Java camp introduced the Java Persistence API (JPA). The mismatch between relational databases and object orientation will soon be over.

When we map components or domain objects to relational database tables, we abstract the relational semantics away from our code, which is the better thing to do. The following is the Customer code for the banking domain object in the diagram above.

Note that the Customer class in the code below inherits from the BaseModel class, an abstract class that marks our domain object as validatable. In your own implementation, you could use an interface instead.

public abstract class BaseModel
{
    public abstract bool CanValidate { get; }
}


public class Customer : BaseModel
{
    public override bool CanValidate { get { return true; } }

    private List<Statement> statements = new List<Statement>();

    public List<Statement> Statements
    {
        get { return statements; }
        set { statements = value; }
    }

    private List<Account> accounts = new List<Account>();

    public List<Account> Accounts
    {
        get { return accounts; }
        set { accounts = value; }
    }
}

The Domain Validation Rule Class
Since our domain objects represent our relational tables, we need to ensure there is valid object-oriented validation before the state of our domain object hits the back end. So what is a validation rule? In this example, domain rules are created as .NET attributes and applied to properties of domain objects. Here are two example rules:

Max length rule: this rule checks that the length of the data in a property of a domain object falls within a range. It inherits from the CustomValidationAttribute abstract class, which is used to validate any domain object that inherits from BaseModel.

class MaxLengthRule : CustomValidationAttribute
{
    private int min;

    public int Min
    {
        get { return min; }
        set { min = value; }
    }

    private int max;

    public int Max
    {
        get { return max; }
        set { max = value; }
    }

    internal override bool IsValid(object value)
    {
        if (null == value)
            return false;

        string text = value.ToString();

        return (text.Length >= min && text.Length <= max);
    }
}


NotNull rule: this rule checks whether a property is null.

internal class NotNullRule : CustomValidationAttribute
{
    internal override bool IsValid(object value)
    {
        return (null != value);
    }
}

Let us now decorate our POCO Customer class with our validation rules. The code below shows how we decorate our domain object with validation rule attributes:


public class Customer : BaseModel
{
    public override bool CanValidate { get { return true; } }

    private List<Statement> statements = new List<Statement>();

    [NotNullRule]
    public List<Statement> Statements
    {
        get { return statements; }
        set { statements = value; }
    }

    private List<Account> accounts = new List<Account>();

    [NotNullRule]
    public List<Account> Accounts
    {
        get { return accounts; }
        set { accounts = value; }
    }
}

Note that we have decorated our Customer class with the NotNull validation rule. This rule ensures that the values of the properties are not null.

Creating the CustomValidationAttribute class

This class is the base class of all the validation rules used in this example, and it holds the entry point, the Validate method, which takes the model as a parameter. The following code snippet is the full definition of CustomValidationAttribute.


[AttributeUsage(AttributeTargets.Property)]
abstract class CustomValidationAttribute : Attribute
{
    abstract internal bool IsValid(object value);

    internal static bool Validate<T>(T model) where T : BaseModel
    {
        Type modelType = GetModelType(model);
        List<bool> validities = new List<bool>();

        foreach (PropertyInfo propertyInfo in modelType.GetProperties())
        {
            object[] customAttributes = propertyInfo
                .GetCustomAttributes(typeof(CustomValidationAttribute), true);

            object value = FindValueByRule(propertyInfo, model);
            foreach (CustomValidationAttribute attribute in customAttributes)
            {
                validities.Add(attribute.IsValid(value));
            }
        }

        return validities.TrueForAll(el => el == true);
    }

    private static Type GetModelType(object instance)
    {
        return instance.GetType();
    }

    private static object FindValueByRule(PropertyInfo propertyInfo, object model)
    {
        return propertyInfo.GetValue(model, null);
    }
}


The code above is the full definition of the base validator class. To use the validation strategy I have been explaining, we make the following simple call, which returns a boolean flag that tells us whether our domain model is valid or not:


Customer customer = new Customer();

bool isValid = CustomValidationAttribute.Validate(customer);


The code above returns a boolean that indicates whether the validation passed. Happy domain validation.
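For readers who want to run the strategy end to end, here is a condensed, self-contained variant of the classes above. Visibility is widened to public and the model is simplified to a single hypothetical Name property, so the shapes differ slightly from the full listings:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public abstract class BaseModel { }

// Condensed base validator: walks properties, applies any attached rules.
[AttributeUsage(AttributeTargets.Property)]
public abstract class CustomValidationAttribute : Attribute
{
    public abstract bool IsValid(object value);

    public static bool Validate<T>(T model) where T : BaseModel
    {
        List<bool> validities = new List<bool>();
        foreach (PropertyInfo p in model.GetType().GetProperties())
        {
            object value = p.GetValue(model, null);
            foreach (CustomValidationAttribute rule in
                     p.GetCustomAttributes(typeof(CustomValidationAttribute), true))
            {
                validities.Add(rule.IsValid(value));
            }
        }
        return validities.TrueForAll(v => v);
    }
}

public class NotNullRule : CustomValidationAttribute
{
    public override bool IsValid(object value) { return value != null; }
}

public class Customer : BaseModel
{
    [NotNullRule]
    public string Name { get; set; }
}

public class Program
{
    public static void Main()
    {
        Customer customer = new Customer();
        Console.WriteLine(CustomValidationAttribute.Validate(customer)); // Name is null

        customer.Name = "Ada";
        Console.WriteLine(CustomValidationAttribute.Validate(customer));
    }
}
```

The first call fails because Name is null; after assigning a value, validation passes.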


Friday, September 26, 2008

Unifying Object Oriented Platforms

The object-oriented programming era is a never-ending story. New platforms are created every day; older platforms are considered outdated, but we see them struggling for supremacy over newer and more innovative ones. When Java, formerly known as the Oak technology, joined the OOP party in the 90s, its aim was to support consumer electronics such as remote controls, PDAs, etc. Unfortunately, Sun Microsystems lost the bid for the consumer electronics project that was the motive behind Java. Yet the Java technology quickly moved onto the internet platform, and seeing that Java was promising in the internet era gave birth to Java applets (applications that run within a web browser). Java became the talk of the development industry. Little did we know that some people somewhere in Redmond were planning to revolutionise the way we write software using OOP.

The promises that Java made

Java promised to be the platform-independent OOP language that would run on any operating system, mobile phone, wrist watch, etc. This promise was met with the different flavours of the JRE (Java Runtime Environment), which handle the interpretation of Java bytecode into the native instruction set. Java did not stop there, but made its way into enterprise and server-side programming. When J2EE was finally introduced to the enterprise market, Java became the popular platform among industries. Java's cross-platform feature does suffer performance problems compared with native languages, because the JVM (Java Virtual Machine) has to be responsible for the lower-level work that the OS would have done for a native language like C. But Java loyalists did not dwell on the bitter side of the technology; they concentrated on the better side, which has made Java a champion since inception.

But Java seems to be losing out, because C#, as an object-oriented programming language, is moving as fast as possible to solve the problems of software development using OOP. Still, both Java and C# are learning from one another.

And there was .NET

Java solves the problem of multiple platforms, but not that of multiple languages. Our industry is full of programming languages, standards and patterns, and there is no silver bullet that will unify these platforms. People writing Java should be able to exchange components with people writing C#, and these components should interoperate without any native tweaks or calls.

Microsoft understands these troubles very well, and that is why the .NET approach is to unify the programming languages that target the .NET platform. Now, at last, we can invest in any such language and still use our older systems, because they integrate very well with the newer ones.

But did .NET solve the problem?

.NET is not really platform independent, because .NET was built for the Windows operating systems. But there is the Mono project ( http://www.mono-project.com/Main_Page ), which concentrates on making the .NET runtime work on Linux, Solaris, Mac OS and even Windows. The news is that Mono, sponsored by Novell, is an open source project based on the same ECMA/ISO standards as C#.

Bringing all platforms together

The time has come for us all to stop tying our development efforts to platforms and/or runtimes; the time has come for us to stop re-inventing the wheel, because we can leverage our existing systems even as we move from one platform to another.

Is Mono the answer?

Mono can run .NET languages as well as Java, Python, Boo, PHP, JavaScript and many more ( http://www.mono-project.com/Languages ). This is the long-awaited universe for the software development community: a unified approach that targets different platforms and natives. Can you imagine speaking French without learning French? That's the Mono approach. We have waited so long for something to bring us all together, and here Mono presents itself as an opportunity.

But Mono is not really the answer. In my experience of software development and of using various technologies to solve problems, the real problem we face is complexity, and I am sure there is some kind of complexity behind the use of Mono. What about the learning curve: should we throw away our years of experience on other platforms for yet another platform? Do we really need an interpreter for all languages? Do we need a language converter instead?

Monday, September 8, 2008

How to implement a generic WCF message interceptor:

In a changing environment of business and business processes, change affects not only the entire workforce, corporate partners and the way business is done, but of course also the tools that make our work lives better. We are left with the task of ensuring that our legacy or existing systems interoperate well with our new ways of working, and that they deliver the results that yield the expected ROI. For today's businesses to survive, we need to carry yesterday's efforts along with us, and we need to keep decoupling in mind when we integrate services for a wider business sector. These were the thoughts at Microsoft when it first launched the framework (WCF) that shook the whole distributed computing industry.

Windows Communication Foundation is a Microsoft initiative for building service-oriented applications that leverages yesterday's and today's technologies; in fact, it complements existing distributed technologies. The WCF (Windows Communication Foundation) infrastructure is built into .NET 3.0/3.5 and can be used for different kinds of communication types and endpoints, ranging from message queues and TCP communication to standard web services. The most commonly used approach is the service-based one, where you host the service as .asmx or .svc.

In a typical WCF implementation, we have the following :

  • Data contract: represents the data that will be sent over the wire from one endpoint to the other. Any CLR-compliant type marked as a data contract can be serialized and sent over the communication protocol.
  • Message contract: think of messages as envelopes that wrap our data/data contracts over the wire. A message has a body and a header section. It is analogous to a SOAP envelope; in fact, it is converted to a SOAP envelope when message serialization begins at runtime.
  • Service contract: this is the interface marked [ServiceContract] that can be called remotely by clients of the service. In our service, we implement this interface and provide the business functionality that clients will consume and call remotely. A service can expose one to many endpoints (listener points), and more than one service operation can be exposed by an endpoint.
  • Service endpoint: think of an endpoint as a message gateway for a hosted service. A service can be configured to listen for requests on several endpoints. A service endpoint has an Address, a Binding, and a Contract:
  1. The address describes the location where messages can be sent.
  2. The binding specifies the type of communication the endpoint makes, using protocols such as HTTP, TCP or MSMQ and security requirements such as SSL, WS-Security, etc.
  3. The contract defines the set of messages and operations that can be accessed by the service's clients.
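The address/binding/contract triple lines up directly in hosting code. Below is a minimal, hedged sketch of hosting a service with one endpoint; the IGreeter contract, the service class and the address are all illustrative, not part of the post's own services:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract, for illustration only.
[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class GreeterService : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class HostingSketch
{
    static void Main()
    {
        var host = new ServiceHost(typeof(GreeterService));

        // The endpoint ABC: Address, Binding, Contract.
        host.AddServiceEndpoint(
            typeof(IGreeter),                  // Contract
            new BasicHttpBinding(),            // Binding
            "http://localhost:8000/greeter");  // Address

        host.Open();   // the service now listens on the endpoint
        // ... serve requests ...
        host.Close();
    }
}
```

The same triple can equally be declared in the service's app.config instead of in code.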

Message Oriented Concept In WCF
I mentioned above that the service contract defines the operations used to send request and response messages over the transport defined by the service endpoint. In a simple client/server architecture, messages can be coupled between the sending interface and the receiving end; that is, there is a stub-and-skeleton relationship between the service and its consumers. There is nothing wrong with the stub-and-skeleton approach, in which a client or consumer keeps a stub, or proxy, of the services it can talk to. But it means that when a service changes, the client proxy must be regenerated with the svcutil.exe tool.

In an ever-changing world of business, we need to build services with simplicity and conformance to change in mind. How do we allow generic communication between our service and the rest of the world? Thinking about the genericity of our services, we can create an interceptor service that sits between our clients and our services: the contract binding remains between client and service, but the address binding is between the client and the interceptor.

Having understood the WCF fundamentals, we now need to understand how to pass messages in a generic context.

The Interceptor Pattern
This pattern came to mind when I fell into a WCF scenario that required changing service messages, or re-delegating a service request to a service other than the requested one. You can also use this pattern to achieve load balancing amongst services.

Some of the reasons why you might want to use an interceptor pattern are the following:
  • Expose a single entry point to the entire world with the interceptor/facade.
  • Do load balancing with the interceptor.
  • Expose a REST (Representational State Transfer) interface to the world while your back-end services speak WCF semantics; the interceptor bridges the gap between plain XML and the SOAP messages required by the WCF services.
  • Manipulate or validate messages on the facade/interceptor end before delegating to the actual service.
  • A form of clean architectural design pattern.
  • Security: with a single entry point, security efforts can focus on that one entry instead of on every service.
The list goes on, and there is hardly a limit to what our service mediator can achieve. So how do we do the coding? (Good question.) Let's start by defining our service contract:


[ServiceContract(SessionMode = SessionMode.Allowed)]
public interface IRequestDispatcher
{
    [OperationContract(IsOneWay = false, Action = "*", ReplyAction = "*")]
    System.ServiceModel.Channels.Message ProcessMessage(System.ServiceModel.Channels.Message message);
}


Note the ProcessMessage operation in our IRequestDispatcher contract. This operation accepts and returns a Message from the System.ServiceModel.Channels namespace, and it is the only operation we need in order to generically intercept message calls between client and server.

You will also note that the ProcessMessage method is decorated with [OperationContract(IsOneWay = false, Action = "*", ReplyAction = "*")]. This tells our service that ProcessMessage can process any operation call from any client, by marking it Action = "*" and ReplyAction = "*". This is the basis of our interceptor strategy.
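A minimal, hedged sketch of what an implementation of this contract might do: inspect the incoming message and forward it, untyped, to a back-end service over a channel built from the same contract. The back-end address and the RequestDispatcher class name are illustrative assumptions, not the post's actual implementation:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Illustrative interceptor: receives any message, optionally inspects the
// SOAP action in its headers, then forwards it to the real service.
public class RequestDispatcher : IRequestDispatcher
{
    public Message ProcessMessage(Message message)
    {
        // Example inspection point: the incoming SOAP action,
        // usable for routing, validation or load balancing.
        string action = message.Headers.Action;

        // Forward the untyped message to the hypothetical back-end service.
        var factory = new ChannelFactory<IRequestDispatcher>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:9000/backoffice"));

        IRequestDispatcher backEnd = factory.CreateChannel();
        Message reply = backEnd.ProcessMessage(message);

        factory.Close();
        return reply;
    }
}
```

Because the contract is Action = "*", the same channel can carry any operation of the back-end service without the interceptor knowing its typed contract.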

The Big Picture
Let's form a simple picture of what we want to achieve. We have two WCF services: one accepts a string and returns a string, the other accepts an object and returns a string. These services are our back-office operations, open only to services within our domain. If we wanted to expose these services to the entire world, we would end up exposing two services, and if either of them changed, the entire world would have to change with it, which is quite burdensome. So, instead of exposing the two services, we expose only one service that is generic over all the others, and this service communicates in a RESTful manner (pure XML, no SOAP messaging).

Let's define the big picture:



Wednesday, September 3, 2008

Asynchronous State Machine Pattern (The C# Way)

Experienced software developers, programmers and architects enjoy the separation-of-concerns approach when building reusable components using the domain engineering practice.
We all understand that, as part of separating application concerns, we need to ensure that our applications are decoupled from their dependencies, and that we properly inject those dependencies without coupling.

As we mature in decoupling our applications and components, we must also understand the theory behind the publisher/subscriber pattern (popularly known as fire-and-forget), which underpins dependency injection. Experience has shown that writing applications without decoupling in mind is a way of creating problems for the near future; this approach is referred to as convention-based application engineering. Many developers rush to market by launching an application that is very difficult to debug, overly designed, and so tightly coupled that it leaves no room for changing and demanding business cases.

Imagine two processes that handle heavy jobs (say, about 50,000 calls a day) coded in a tightly coupled manner. These processes can, by all standards, handle their concerns independently without one waiting for the other to finish executing. In a service-oriented architecture, where business processes are shipped and designed as web services or Windows services, we need to understand carefully when to use an asynchronous call and when to use a synchronous call, because of the load involved in processing a request.

Scenario 1
Let us look at a simple analogy: we have a business process service that handles our internal and external business concerns. It comprises the following functionality:
  1. Register new customer
  2. Register with payment gateway
  3. Send notifications
  4. Make periodical payment
  5. Calculate due payment
  6. Daily status monitoring.
Before I go ahead and talk more about our area of concern, I would like to introduce a simple architectural diagram of our case study, so that you can catch a glimpse of the big picture I have in mind. Here we go:

The architecture diagram above shows that our business process handles several business functionalities, so it would not be wise to make synchronous calls on our business services. One scenario where we may need the separation-of-concerns strategy I have been talking about is the following:

The daily processor runs.
The status functionality Begins.
The payment is calculated.
Notification is sent. (Based on status).

At the end of each business cycle, the final task is to send a notification (email). Suppose we have more than 40,000 status checks, which in turn lead to thousands of business criteria to meet. If we call the CheckStatus() method on our service 40,000 times, a single call will invoke several operations, those operations will launch legacy systems, and those legacy systems have custom-built operations of their own. Even for a single call, the load on the business service is very high; imagine it for 40,000. And remember, the process calling our business service is also tied to our service until the notification is sent.

It would be much nicer to tell our business process that a customer needs a status check, and that is all; we do not need to wait for the business process to answer yes or no. All we care about is that we handed the call to the business process. Then, asynchronously, the business process handles all the dirty work without blocking its caller. Now that we understand where to use asynchronous communication, let us dig into the code:


First of all, we need to create the event argument that is raised when a status has been met. Note: I won't be writing full code here, as readers are assumed to be experienced developers.



public class StatusChangeEventArgs : EventArgs
{
public CustomerState CustomerState { get; set; }

public StatusChangeEventArgs(Customer customer)
{
//Initialize here.
}
}

public enum CustomerState
{
Stateless = 0,
CreditCardDetailsExpired = 100,
PaymentNotice = 200,
PaymentFailure = 300,
CanCalculateFirstPayment = 600,
PaymentChangeNotified = 1000,
DefaultState = Stateless,
}



The code above lists the kinds of state our customer can fall into in our scenario. Now let us see the state engine implementation. First, how do we fire an event to the state handlers that need to consume these events and do further processing based on the state that was raised? We need a delegate first:



public delegate void CustomerStateUpdate(object source, StatusChangeEventArgs eventArgs);



Now we have successfully defined the delegate that serves as the template for our event handlers. Remember, we are handling the different customer states asynchronously; that is, the application or service that sends us the customer object will not know about our processing and will not wait for results.

Now we need to implement the state publisher, the class that scans a customer object, fires the customer state change event, and registers interested subscribers to the state event. Here is a simple structure:



public class CustomerStatePublisher
{
public event CustomerStateUpdate CustomerStateEvent;
StatusChangeEventArgs eventArgs;
Customer customer;

public CustomerStatePublisher(Customer customer, StatusChangeEventArgs eventArgs)
{
this.eventArgs = eventArgs;
this.customer = customer;
}

public void AddSubscriber(CustomerStateUpdate subscriber)
{
CustomerStateEvent += subscriber;
}

public void RemoveSubscriber(CustomerStateUpdate subscriber)
{
CustomerStateEvent -= subscriber;
}

public void OnCustomerStateChanged(CustomerState customerState)
{
//We are not processing stateless customer
if (customerState == CustomerState.Stateless)
return;

if (null != CustomerStateEvent)
{
eventArgs.CustomerState = customerState;

//CustomerStateEvent(this.customer, eventArgs);

Delegate[] customerStateHandlers = CustomerStateEvent.GetInvocationList();

foreach (Delegate handler in customerStateHandlers)
{
CustomerStateUpdate eventHandler = (CustomerStateUpdate)handler;
eventHandler.BeginInvoke(this, eventArgs, new AsyncCallback(AsyncReturn), eventHandler);
}
}
}

private void AsyncReturn(IAsyncResult result)
{
CustomerStateUpdate eventHandler =
(CustomerStateUpdate)result.AsyncState;

eventHandler.EndInvoke(result);
}

public void ProcessCustomerState()
{
RaiseStateChangedEvent(customer.CanCalculateFirstPayment());
RaiseStateChangedEvent(customer.CanCreatePendingPayment());
RaiseStateChangedEvent(customer.CheckCreditCardState());
RaiseStateChangedEvent(customer.OweMoney());
}

public void RaiseStateChangedEvent(CustomerState customerState)
{
if (customerState != CustomerState.Stateless)
OnCustomerStateChanged(customerState);
}

}



The class structure above is simple to understand, so I will just point out where the asynchronous functionality lives. The OnCustomerStateChanged method calls all the subscribers registered to this event asynchronously: it gets the invocation list from CustomerStateEvent (CustomerStateEvent.GetInvocationList()), iterates through it, and calls BeginInvoke on each handler. At this point, in a real-world implementation, we can detach from the caller service and let the asynchronous operations run.
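One caveat worth flagging: calling BeginInvoke on a delegate is only supported on the classic .NET Framework; on .NET Core and .NET 5+ it throws PlatformNotSupportedException. A minimal sketch of the same fan-out using Task.Run instead (the AsyncFanOut class name is my own, not from the post):

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncFanOut
{
    //Fan a state-change notification out to every handler without blocking the caller.
    public static Task PublishAsync(Delegate[] handlers, object source, EventArgs args)
    {
        var tasks = new Task[handlers.Length];
        for (int i = 0; i < handlers.Length; i++)
        {
            Delegate handler = handlers[i]; //capture per iteration for the closure
            tasks[i] = Task.Run(() => handler.DynamicInvoke(source, args));
        }
        //Ignore the returned task for fire-and-forget, or await it to surface handler errors.
        return Task.WhenAll(tasks);
    }
}
```

The caller can drop the returned task for true fire-and-forget semantics, which mirrors what BeginInvoke gives us in the publisher above.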

Now that we have fully defined our CustomerStatePublisher class, we need to understand how to register subscribers and how to kick off our asynchronous state machine. The following code registers subscribers with the CustomerStatePublisher and begins state processing:




//Create the event args that will be passed to subscribers
StatusChangeEventArgs eventArgs = new StatusChangeEventArgs(customer);

//Create an instance of the customer state publisher
CustomerStatePublisher publisher = new CustomerStatePublisher(customer, eventArgs);


publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.CalculateFirstPayment));
publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.PendingPaymentChanged));
publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.CreatePendingPayment));
publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.TrialWillExpireNotification));
publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.CreditCardExpiryNotification));
publisher.AddSubscriber(new CustomerStateUpdate(StateHandler.FreeServiceNotice));

publisher.ProcessCustomerState();



The rest of the code defines the subscribers, i.e. the state handlers that act on the states they can handle. Here is an implementation of one such handler:



public void CustomerCreditCardExpire(object source, StatusChangeEventArgs eventArgs)
{
if (eventArgs.CustomerState != CustomerState.CreditCardDetailsExpired)
return;

//Execute statebased functionality here.
}



Having done all this, when control hits publisher.ProcessCustomerState(), the whole state processing begins; whenever a fitting state handler is found, the publisher delegates the state to it. I would welcome your suggestions. Thanks.

Monday, September 1, 2008

Introducing .NET Throwable Pattern (The Exception By Contract Class).

If you are a Java developer, do not expect that I am referring to the Java Throwable class, the ultimate superclass of all errors and exceptions thrown by the JVM (Java Virtual Machine). I must confess that I love the name, and that is why I am sticking to it in .NET.

Most .NET application developers, architects, and programmers understand the throw keyword well. It can be used to throw a new exception from a catch block, or to re-throw the actual exception while retaining the stack trace. Let me explain with code:

Here is an example of the throw keyword :

1. Throw new Exception :


try
{
//...
}
catch (Exception x)
{
throw x; //re-throw, resetting the stack trace
//or:
throw new IamLovingItException("I am loving it", x);
}


The code above uses the throw keyword either to re-throw the caught exception or to throw a new one in its place. Either way, the original stack trace is lost to the caller: re-throwing with throw x resets the trace, and throwing a new exception replaces it entirely (though the original can survive as the InnerException).

2. Just Throw.


try
{

}
catch(Exception ex)
{
throw;
}


The code above re-throws the same exception, stack trace included, that was raised in the try block. Options one and two are good but not ideal when we are dealing with a business-focused application that uses thrown exceptions to communicate changes or business constraints to the caller. Examples are AccountBlockedException, ZeroBalanceException, and TransactionRolledBackException; there is no end to the ways we use custom exceptions.
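The difference between options one and two is easy to demonstrate. A small sketch (the Boom method and class name are mine, for illustration): with throw; the trace still points at the method that originally failed, while throw ex; restarts the trace at the re-throw site.

```csharp
using System;

public static class RethrowDemo
{
    static void Boom()
    {
        throw new InvalidOperationException("boom");
    }

    //"throw;" re-throws the caught exception; the trace still points at Boom().
    public static void PreservedTrace()
    {
        try { Boom(); }
        catch { throw; }
    }

    //"throw ex;" resets the trace; it now starts here, not at Boom().
    public static void ResetTrace()
    {
        try { Boom(); }
        catch (Exception ex) { throw ex; }
    }
}
```

Comparing the StackTrace of the two caught exceptions shows why throw; is the safer default when you just want to propagate.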

A Typical Scenario (Scenario One)

You have created a payment web service (say, a WCF service) that serves as an abstraction layer over any kind of third-party credit card vendor, such as PayPal, WorldPay, or Nochex. Our payment web service can be configured to use any payment gateway because it is generic and open to extensibility.

Now, anything can go wrong while processing a payment or registering card details via the payment service abstraction layer. Let's assume we are trying to process £100 and, along the way, the real payment server shuts down. What do we do in this scenario?

Option 1: we throw the same exception that was thrown by the payment gateway
Option 2: we re-brand that exception and bake a new one from it.
Option 3: we handle that exception in our custom Payment service, and return status (As Enum), to our clients.

Think for a minute about which approach is best in our scenario. Even if the payment gateway did not throw an error but returned a status like PaymentFailed to our custom payment service, do we act on that status and throw a new exception from there? And how do we act on it? The obvious approach is:


if(paymentGateWayStatus == Status.PaymentFailed)
{
throw new PaymentFailedException("Failed");
}
else if(paymentGateWayStatus == Status.ServerShutDown)
{
throw new PaymentServerShutDownException("Shut Down");
}


We can go on and on until we have a very messy if statement with many throw blocks, repeated in almost every method of our payment service abstraction, if not all of them. What do we do to reduce this redundant code?

I have an idea: let us create a pattern for this scenario, since we are trying to avoid repetition. A pattern is born when you have been doing one thing for a long time, realize it is time-consuming, and decide to make it reusable code-wide and application-wide.

The New Throwable Pattern.

In our case, we create a factory-like class called Throwable. This class can check a boolean condition and decide whether to throw an exception, and it can walk through a switch statement to find which exception matches a given status and throw it from there. To go straight to the point, the Throwable class is defined as follows:


public static class Throwable
{
public delegate bool Is();

private static void Throw(Exception exception)
{
throw exception;
}

public static void CauseThrow<T>(T exception) where T : Exception
{
throw exception;
}

public static void ThrowWhenIsNull<T>(object instance, string message) where T : Exception
{
if (null == instance)
throw Activator.CreateInstance(typeof(T), message) as Exception;
}

public static void CauseThrowOnTrue<T>(Is anonymous, string details) where T : Exception
{
if (anonymous.Invoke())
throw Activator.CreateInstance(typeof(T), details) as Exception;
}

public static void CauseThrowOnTrue<T>(bool status, string details) where T : Exception
{
if (status)
throw Activator.CreateInstance(typeof(T), details) as Exception;
}

public static void CauseThrowOnFalse<T>(bool status, string details) where T : Exception
{
if (!status)
throw Activator.CreateInstance(typeof(T), details) as Exception;
}

public static void CauseThrowOnFalse<T>(Is anonymous, string details) where T : Exception
{
if (!anonymous.Invoke())
throw Activator.CreateInstance(typeof(T), details) as Exception;
}

public static void CauseThrow(Status status, string details)
{
switch (status)
{
case Status.ERROR:
Throw(new PaymentServerErrorException(details));
break;
case Status.INVALID:
Throw(new InvalidPaymentErrorException(details));
break;
case Status.MALFORMED:
Throw(new MalformedPaymentException(details));
break;
case Status.NOTAUTHED:
Throw(new AuthenticationException(details));
break;
case Status.REJECTED:
Throw(new PaymentServerErrorException(details));
break;
default:
return;
}
}
}


The code above is our Throwable class, and we can call it in our code as follows:


Throwable.CauseThrow(new CardException("Card is not valid"));

Throwable.CauseThrowOnFalse<AuthenticationException>(status == Status.AUTHENTICATED, "Not AUTH");

Throwable.ThrowWhenIsNull<CardException>(creditCard, "Invalid credit card");


Throwable.CauseThrow(status, "Testing");


We can see that the Throwable approach is cleaner and reusable across the codebase. You are welcome to contribute to the implementation, as this is just a simple prototype.

Cheers and enjoy.

Friday, August 29, 2008

.NET Primitive Wrapper Classes, Where Art Thou?

In my day-to-day experience with code, forum users, and other developers, I have been asked this question (wrapper classes for primitive types) a thousand times. Many developers, especially those coming from the Java land (like me; I am a Java lander and still am), bring with them their jar of Java coffee, and when they arrive in .NET land they realize it is a different ball game, because the .NET people are not coffee drinkers; they are business men. Let's cut the story of the two lands short, because their similarities and dissimilarities are what make them different.

We will not argue about the two lands today, but instead work out a simple primitive wrapper for int. Let us call our wrapper Integer (or something like IntegerWrapper, since the .NET team may want to include an Integer class in the base class libraries in the not-so-distant future). In Java, an int variable is mutable: you are allowed to change its value anywhere in your code.

Java has several wrapper classes, which I will mention below; they wrap primitive data types and give an object representation of them. The following are some of the wrapper and boxed numeric classes in Java:

  1. BigDecimal
  2. BigInteger
  3. Byte
  4. Double
  5. Float
  6. Long
  7. Integer
  8. Short
All of the classes above inherit from an abstract class called Number (you can sense some of Java's intuitiveness here). So, in Java, a primitive integer can be used as follows:


int value = 200;
Integer bigInt = value;
value = bigInt;


Although wrapping and unwrapping, or autoboxing and unboxing (.NET land calls it boxing and unboxing, to and from object), primitive data types has performance implications, there are situations where we cannot do without it. So how do we bring this kind of functionality to .NET? How do we ensure that our primitive types can be treated as reference types? First, let me point out a few facts about what we already have in the .NET API (Application Programming Interface).

Boxing and UNBoxing

With the little approach described above, we can see that in .NET we have the terms boxing and unboxing. Boxing allows us to use a value type as an object, and unboxing casts the value back from the object to its primitive state. The following is valid:


object value = 200; //Box an integer
int intValue = (int)value; //Unbox the value 200 back to an int via an explicit cast. It is not automatic.


The snippet above shows how we can do boxing and unboxing in .NET, but where is the performance implication? Of course there is one: when you box a value type in .NET, an object is allocated for it on the garbage-collected heap. Let us take another simple example:


object value = 200;
object value2 = value; //copies the reference: both variables point at the same box

value2 = 500; //boxes a new object and rebinds value2
//value still contains 200.


In the code above, the assignment object value2 = value copies the reference, so both variables initially point at the same boxed object. Assigning 500 to value2 boxes a new object on the heap and rebinds value2 to it; the original box is untouched, which is why value still holds 200. We need to be careful when dealing with boxing and unboxing this way.
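A small self-contained check of the allocation behaviour (the BoxingDemo class name is mine): each boxing conversion allocates its own heap object, even for the same value.

```csharp
using System;

public static class BoxingDemo
{
    public static bool SameReference()
    {
        int n = 200;
        object a = n; //allocates one box on the heap
        object b = n; //allocates a second, independent box
        return ReferenceEquals(a, b); //false: two distinct heap objects
    }

    public static bool SameValue()
    {
        object a = 200;
        object b = 200;
        return a.Equals(b); //true: the boxed values compare equal
    }
}
```

Note that .NET, unlike Java's Integer cache for small values, always allocates a fresh box, so reference equality on boxed ints is almost never what you want.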

We could also use the System.ValueType class, the base class of all .NET value types, but this approach does not guarantee type safety either; it encourages more type casting and incurs the same performance costs as boxing and unboxing.

So what are we going to do?

We are going to create a custom Integer wrapper for .NET, and we will ensure that it can be used like its Java counterpart. Here is the code:


public class Integer
{
int value = 0;

public Integer(int value)
{
this.value = value;
}

public static implicit operator Integer(int value)
{
return new Integer(value);
}

public static implicit operator int(Integer integer)
{
return integer.value;
}

public static int operator +(Integer one, Integer two)
{
return one.value + two.value;
}

public static Integer operator +(int one, Integer two)
{
return new Integer(one + two.value); //use the field directly to avoid recursing into this operator
}

public static int operator -(Integer one, Integer two)
{
return one.value - two.value;
}

public static Integer operator -(int one, Integer two)
{
return new Integer(one - two.value); //use the field directly to avoid recursing into this operator
}
}


The code above is a .NET rendition of Java's Integer wrapper class. Having created the class, let us see how we can use it:


Integer integer = 345;

integer = 10 * 11;

int value = integer;


Now we have a reference type wrapping the int type in .NET. Note that you can overload other operators to make sure your reference type works with them too. Enjoy your wrapper.
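A quick sanity check of the wrapper's conversions and operator. The Integer class here is trimmed down to just the members exercised, so the snippet is self-contained; WrapperDemo is my own scaffolding:

```csharp
using System;

public class Integer
{
    int value;

    public Integer(int value)
    {
        this.value = value;
    }

    public static implicit operator Integer(int value)
    {
        return new Integer(value);
    }

    public static implicit operator int(Integer integer)
    {
        return integer.value;
    }

    public static int operator +(Integer one, Integer two)
    {
        return one.value + two.value;
    }
}

public static class WrapperDemo
{
    public static int RoundTrip()
    {
        Integer wrapped = 345; //int -> Integer via implicit conversion
        int back = wrapped;    //Integer -> int via implicit conversion
        return back;
    }

    public static int Add()
    {
        Integer a = 100;
        Integer b = 23;
        return a + b; //operator +(Integer, Integer)
    }
}
```

The implicit conversion operators are what give the wrapper its Java-like feel: assignment in both directions needs no cast.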

Thursday, August 28, 2008

Command Pattern as Workflow Pattern

In today's development efforts, we need to write code that really meets our demands: code that can be executed independently, code that is separated by what it does (concerns), pragmatic code that is safe to execute (reliability), and efficient code that handles errors proactively.

To write this kind of code, you need to start delving into design patterns and practices that make life better for all of us. One of these patterns is what I will be discussing today: the Command pattern. As its name implies, it is the practice of encapsulating chunks of an operation or workflow into a reusable form that can be invoked independently of the environment it was created in. See also the Command pattern on the Do Factory website.

A typical scenario is this: you are writing a web service that must execute the following dependent operations as one atomic operation:

Get details of products/services.
Calculate the payment due.
Make payment, by calling an external web service (like a payment gateway).
Update your legacy application.

A good look at the cases above tells us that these operations will execute within the call context of a single method. You can imagine how bulky and spaghetti-like the underlying code would be: many if statements, many try/catch blocks, many code horrors. Instead, let us take another look at the problem: what if we model the processes above as single units that depend on one another's statuses?
As single units we can have the following:


ProductServiceCommand
CalculatePaymentCommand
MakePaymentCommand
UpdateLegacyCommand


These commands can be written as separate classes, which means they can stand alone. Before I continue with this design practice, I will introduce the Command pattern via the base class below:


abstract public class BaseCommand
{
CommandReceiver commandReceiver;

public BaseCommand(CommandReceiver commandReceiver)
{
this.commandReceiver = commandReceiver;
}

protected CommandReceiver CommandReceiver
{
get { return commandReceiver; }
}

abstract public void Execute();
}


The class above defines the structure of our commands: all commands will inherit from it and override the abstract Execute method. All commands share one instance of a status object, or receiver object (in our context, CommandReceiver), used to share information among commands. For example, we may want to check the status of CalculatePaymentCommand before we execute MakePaymentCommand. If the calculate payment command fails, instead of throwing an exception, it can simply record a status that the other commands consume. Let us assume our status object looks like this:


public class CommandReceiver
{
public CommandStatus ProductStatus { get; set; }
public CommandStatus MakePaymentStatus { get; set; }
public CommandStatus PaymentStatus { get; set; }
public double Amount { get; set; }

public CommandReceiver()
{
ProductStatus = CommandStatus.NOTSET;
MakePaymentStatus = CommandStatus.NOTSET;
PaymentStatus = CommandStatus.NOTSET;
}
}

public enum CommandStatus : int
{
NOTSET = 0,
OK = 1,
FAILED = 2,
PAYMENT_SUCCESSFUL = 3,
PAYMENT_SERVER_ERROR = 4,
PRODUCT_UPDATED = 5,
UNSUCCESSFUL_PAYMENT = 6,
MAY_CONTINUE = OK,
MUST_DISCONTINUE = FAILED,
}


The CommandReceiver above is shared among the commands, so the state information of one command can be seen by the others through the receiver object. Now let us delve into implementing the commands. The following code block shows the structure of a command; I will describe only one, for space and readability:


public delegate void StatusChangedEventHandler(object source, EventArgs e);

public class PaymentCommand : BaseCommand
{
public event StatusChangedEventHandler PaymentStatusChanged;

public PaymentCommand(CommandReceiver commandReceiver) : base(commandReceiver)
{
}

public override void Execute()
{
if (CommandReceiver.ProductStatus == CommandStatus.OK)
{
//Execute payment command here.
}
}
}


The class above is a simple template for how our commands will be implemented. Assume that all other commands inherit from BaseCommand, take a CommandReceiver in their constructor, and implement the Execute method.

Now that we are clear about how our four commands will be implemented, we wire them up as follows:


CommandReceiver commandReceiver = new CommandReceiver();
ProductServiceCommand productCommand = new ProductServiceCommand(commandReceiver);
CalculatePaymentCommand paymentCommand = new CalculatePaymentCommand(commandReceiver);
MakePaymentCommand makePaymentCommand = new MakePaymentCommand(commandReceiver);
UpdateLegacyCommand updateLegacyCommand = new UpdateLegacyCommand(commandReceiver);


The initialization code above can be used as follows :


productCommand.Execute();
paymentCommand.Execute();
makePaymentCommand.Execute();
updateLegacyCommand.Execute();


At last we have separated each command into workflows that can be executed independently or as one unit of operation. Since commands share statuses, a command can check whether the commands before it executed successfully before it runs. Commands can also register events on one another, so a command can subscribe to another command's status change notifications. There are many uses of the Command pattern; this is just the one I have explained: the Command pattern used as a workflow pattern. Enjoy patternising.
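The whole workflow idea can be condensed into a self-contained sketch: commands queued in a list and executed polymorphically, each checking the shared receiver before doing its work. Class names mirror the post, but the bodies are trimmed placeholders of my own:

```csharp
using System;
using System.Collections.Generic;

public enum CommandStatus { NOTSET, OK, FAILED }

public class CommandReceiver
{
    public CommandStatus ProductStatus = CommandStatus.NOTSET;
    public CommandStatus PaymentStatus = CommandStatus.NOTSET;
}

public abstract class BaseCommand
{
    protected readonly CommandReceiver Receiver;
    protected BaseCommand(CommandReceiver receiver) { Receiver = receiver; }
    public abstract void Execute();
}

public class ProductServiceCommand : BaseCommand
{
    public ProductServiceCommand(CommandReceiver r) : base(r) { }
    public override void Execute() { Receiver.ProductStatus = CommandStatus.OK; }
}

public class CalculatePaymentCommand : BaseCommand
{
    public CalculatePaymentCommand(CommandReceiver r) : base(r) { }
    public override void Execute()
    {
        //Only proceed when the upstream command succeeded.
        Receiver.PaymentStatus = Receiver.ProductStatus == CommandStatus.OK
            ? CommandStatus.OK
            : CommandStatus.FAILED;
    }
}

public static class Workflow
{
    public static CommandReceiver Run()
    {
        CommandReceiver receiver = new CommandReceiver();
        List<BaseCommand> commands = new List<BaseCommand>
        {
            new ProductServiceCommand(receiver),
            new CalculatePaymentCommand(receiver),
        };
        foreach (BaseCommand command in commands)
            command.Execute();
        return receiver;
    }
}
```

Because the caller only sees a list of BaseCommand, adding, removing, or reordering steps never touches the execution loop, which is the main payoff of the pattern used this way.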

Friday, August 15, 2008

Persistence Ignorance. Is That Another Acronym?

Gone are the days when we coupled our data access code into our business logic. The days of manually writing SQL scripts are over, and stored procedures are becoming unpopular in object-oriented programming as the paradigm shifts from data-centricity to object focus. We write modern code with modern tools and languages like C#, Java, Smalltalk, and C++. These platforms and languages follow a common design structure termed object orientation, and since we use OOP, we want OOP semantics in our coding culture, analysis, and software design.

To separate data concerns from OOP, we have seen the birth of many ORM (Object-Relational Mapping) frameworks and code generators, and there are still many unborn frameworks coming into play soon. Soon we will feel the quantum leap from data-centeredness to object orientation. As part of this shift in paradigm, we saw the birth of AOP (Aspect-Oriented Programming), and yet more is coming.

Our focus today is persistence frameworks and how the industry has tried to make them persistence ignorant. For those unfamiliar with the term, persistence ignorance is the ability of our domain model to know nothing about the persistence framework. The following code is not persistence ignorant:

public class Foo : IPersistable { }

The class Foo above is a domain entity, but it is forced to implement IPersistable because the underlying persistence framework requires it. If you ever change persistence frameworks, the problem surfaces: you must change a lot of code to make your classes compliant with the new framework.
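By contrast, a persistence-ignorant entity is just a plain class. A minimal sketch (the Customer entity is illustrative, not from the post):

```csharp
//Persistence-ignorant: no framework interface, no mandatory base class,
//no embedded mapping. The class knows nothing about how it is stored.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Any mapping to tables and columns lives outside the entity (in an external mapping file or a separate configuration layer), so swapping persistence frameworks leaves Customer untouched.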


Most persistence frameworks are not persistence ignorant. How do we plan for cross-ORM portability when we must decorate our entities with attributes, implement IBigBoss, make our POCOs or POJOs abstract, or use mapping XML files embedded into our assemblies? (The XML approach is the father of coupling, because it travels with our supposedly transparent POCOs as an embedded resource, and it does not play well with the range of refactoring tools we have today.)

As we proceed through the chaotic nature of our industry, we realize there is no silver bullet. Even though we longed and waited for miracles to happen, some of us realized we cannot wait forever; something needed to be done. That is where open-source frameworks come into play, and different kinds of AOP (Aspect-Oriented Programming) frameworks have filled the programming community.

Recently I went searching for a persistence-ignorant ORM framework, and I realized there is none; even where one claims to be, the framework is prescriptive (meaning you must obey its do's and don'ts).

There have been several long debates in the object persistence industry, but there has been no silver bullet, and there won't be one. Relational is not object, period. So we still need a way to tell the persistence engine how the transformation will be done. If Microsoft delivers such functionality without needing Visual Studio to hide the intricacies of the mapping, we are ready to sing "oh happy day".

Most of us (developers, analysts, architects) are the root cause of what is happening to software engineering today: we always want rocket science, something that cannot be done without plumbing. Have we forgotten that even the JVMs and CLRs of today hide things from us? If not, why don't we all program in bytecode or IL? Even OOP has its own semantics, which we must follow if we want to use the language. So I wonder why we cannot live with the semantics of ORM mapping, instead of asking too much of the framework.