
Friday, December 10, 2004

Everything on the internet should have a URL? 

The above quote from Tim Berners-Lee, recently reemphasised by Adam Bosworth (ex-BEA, now Google) in a speech (see http://www.adambosworth.net/archives/000031.html), started me thinking about how B2B services work today and how, with a bit of lateral thinking and an understanding of the requirements for the services offered, they could work differently.

Today most B2B services are asynchronous in nature. They involve stateful processes and are usually based on a model of establishing an initial business request followed by a series of milestones indicating how the request is being handled. In most cases the timescales for these processes are measured in hours rather than minutes.

Our current model is based on message passing: once sent, a message is purely bits on a wire. It does not have a URL and cannot be addressed, even when it arrives at its destination, where it is consumed and fed into the receiving organisation. If the destination is not responding, the result is bits fading into the ether. Should the destination system come back to life, it does not know that the message ever existed and cannot, of itself, resynchronise to a state that allows it to continue.

Another model is to use the idea behind publish and subscribe. In this model the party with some information to make available publishes it on a web site. The expected destination has already subscribed to hear about this information, so it is notified when a change happens and then goes to the publisher’s site and collects the information. If for some reason the destination has not received the prompt, it can reset itself by going to the publisher’s site and picking up the set of information it has not previously received.

There is a set of simple technology being used in large volumes today by millions of people to receive information from the internet. These solutions are based on simple web technologies: HTTP, XML/HTML and RSS. They are used by major news organisations such as the BBC and Reuters, and also by small publishers writing weblogs.

This technology has now become commonplace and is regarded as rather low tech. This is a good thing. It is tried and tested and is becoming ubiquitous across the various computing platforms. In most cases the technology is very simple to manage, and development tools are available to make the coder’s job very simple.

What I envisage is an amalgam of the event-based protocols such as Web Services, ebXML Messaging, etc. and the publish and subscribe model from news syndication. The aim of using both messaging and static publishing is a platform that can support high volumes using simple technology while offering high resilience.

Example Scenario
Take a simple transaction such as raising an order. The default mechanism for sending the order is a messaging gateway based solution. At the same time as the message is despatched over the network, it is placed on a publicly facing web site (secure if necessary) and an entry in an appropriate syndication feed is updated (a new RSS item entry). The URL for this stored entity is referred to within the message, either directly (the URL is in the message) or indirectly (the URL is derivable from information held within the message).
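As a rough illustration, here is a minimal sketch of that sender side in Java, assuming the Web Message Store is simply a directory served by an ordinary web server; the paths, URLs, file names and method names are purely illustrative.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OrderPublisher {

    // Hypothetical root of the publicly served Web Message Store.
    private static final Path STORE_ROOT = Path.of("/var/www/b2b/outbound");

    public static String publishOrder(String orderId, String orderXml) throws IOException {
        // 1. Place the message where the partner (and humans) can fetch it.
        Path stored = STORE_ROOT.resolve(orderId + ".xml");
        Files.writeString(stored, orderXml, StandardCharsets.UTF_8);

        // 2. The URL the partner will use; here it is derivable from the order id.
        String url = "https://b2b.example.com/outbound/" + orderId + ".xml";

        // 3. Record a new RSS item for the outbound feed (simplified: a real
        //    implementation would rewrite the feed document itself).
        String item = "<item><title>Order " + orderId + "</title>"
                + "<link>" + url + "</link></item>\n";
        Files.writeString(STORE_ROOT.resolve("feed-items.xml"), item,
                StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // 4. The same URL is carried inside (or derivable from) the message
        //    despatched over the normal messaging gateway.
        return url;
    }
}

The same order XML goes out twice, once over the gateway and once into the store, which is the whole point: either copy is enough for the partner to carry on.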
On receiving the message, a receipt would normally be sent back indicating that the message had been successfully transmitted. With the Stored Message this is no longer necessary: the receiver takes the message, stores it at a known location of its own and updates its RSS entry. The reasoning is that there is no such thing as a receipt to a receipt, and if a receipt fails to be sent or to get through there is ambiguity about the state of the process associated with the messages. In this model it is clear that a message has been received simply by enquiring of the receiver’s RSS feed. This is a check the sender can choose to make or not; the sender is free to continue.
When the receiver processes the order they reverse the process, except that the message being sent back is linked to the original message. This could be done by using a new directory structure for each new conversation or by making sure the items are linked in the RSS items. When the company who raised the order gets the reply, they also link it to the original message on their web site and update their RSS entries.

In the unfortunate event that something goes wrong and the messaging interfaces fail, the party wishing to send messages still attempts to use the messaging route while keeping the web site based message store up to date. When the gateway that has failed returns, it can resynchronise with the remote system using the RSS feed and the Web Message Store. Once it has resynchronised, its Web Message Store should be the mirror image of the remote system’s.
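A minimal sketch of that resynchronisation step, assuming the remote feed carries one <link> per stored message and that the local inbound store names each message after the last segment of its URL; all URLs, paths and names here are illustrative.

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class GatewayResync {

    // Hypothetical local inbound message store.
    private static final Path INBOUND = Path.of("/var/b2b/inbound");

    public static void resync(String feedUrl) throws Exception {
        Document feed;
        try (InputStream in = new URL(feedUrl).openStream()) {
            feed = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(in);
        }
        NodeList links = feed.getElementsByTagName("link");
        for (int i = 0; i < links.getLength(); i++) {
            String messageUrl = links.item(i).getTextContent().trim();
            String name = messageUrl.substring(messageUrl.lastIndexOf('/') + 1);
            if (name.isEmpty()) {
                continue; // e.g. the channel-level <link>, not a stored message
            }
            Path local = INBOUND.resolve(name);
            if (!Files.exists(local)) {
                // Roll forward: collect the messages we missed while we were down.
                try (InputStream msg = new URL(messageUrl).openStream()) {
                    Files.copy(msg, local);
                }
            }
        }
    }
}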

Within this scenario the two ends of the interface have been operating according to the published process. However, there are times when things occur that need to be flagged to the remote system: perhaps a crucial back end system is down, or a series of messages have been sent that were not part of the formal process. Using the Web Message Store and the RSS feed it is possible to keep the other party’s systems informed of things that are not directly related to the process but may be service affecting.

Logistics
Using XML messages and storing them has increased some companies’ storage by six times over previous EDI mechanisms. To mitigate this, the Web Message Store would not store raw XML but compressed XML, in a format agreed by whatever mechanism delivers the best compression and can be used by both sides. Typically XML compresses extremely well, and the amount of storage required could be similar to that of a reasonably busy web forum.
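By way of illustration, a minimal Java sketch of compressing a message before it goes into the store, using GZIP simply as an example of a format both sides might agree on.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class MessageCompressor {

    // Compress an XML message; the repetitive tags mean the ratio is usually very good.
    public static byte[] compress(String xml) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        return buffer.toByteArray();
    }
}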

The reason for the Web Message Store is that it makes the messaging interface visible: something that can be viewed and monitored by both humans and machines. There are several tools that can aggregate RSS feeds and make them appear as one feed, which would allow a B2B gateway site to have access to all its partners’ remote stores using very low technology solutions. What is more, the amalgam of both messaging and RSS lowers the point of entry for customers a great deal. Messaging solutions typically take time to test and integrate, whereas web sites are now commonly managed by even the smallest enterprise and publishing information on them is fairly straightforward. An enterprise could enter the B2B market simply by placing XML messages in the Message Store and making sure the remote system knows to pick them up. With this simple agreement a set of B2B interfaces could be up and running in days. If the interfaces need to become more real time, the messaging route could be developed later, but it is not necessary for most solutions deployed today.

There is another scenario: using a common Web Message Store supplied and managed by a third party. In this model both the sender and the receiver would simply use the Web Message Store as a simple web site to which they publish their XML messages. The third party could then take the messages and either despatch them to the other party using an agreed mechanism or simply manage the logistics of the common platform.

Internet vs Intranet
All the discussion above has been around the B2B space, but should this model not also be employed within an organisation? Does integration work so well inside enterprises that resynchronisation between systems is never required? Do hubs get out of step? Do we always know the destination of the information that a system publishes?

Inside an organisation the use of the publish and subscribe model could allow multiple clients to consume information using very low technology solutions. The ability of most applications to read XML and attach to an HTTP stream means this method might actually have greater effect inside an organisation. No longer would messages have to be sprayed around the company; they could be published once and picked up by whoever needs them. The only major discipline in this environment is that, when the interface being implemented is bidirectional, both parties need to register with the other’s site. Once this is done the results can flow unrestricted by downtime.

Withdrawing Messages
With a messaging system it is impossible to withdraw a message should the circumstances of the original transaction no longer apply. With the Web Message Store it is possible to remove entries if they are causing problems or if, due to a failure of the receiver, the transactions are no longer applicable. This means that when the roll forward takes place at resynchronisation time, only the messages that still apply need to be rolled forward.

As the store follows a simple web site model, removing entries is as simple as deleting web pages. This might be regarded as a good thing in the event of trouble, but could be an issue if the access rights to delete are given out too freely.

Conclusion
B2B gateways that implement messaging solutions can be complex and difficult to manage in error scenarios. By taking an open approach to what messages have been sent and received, it is possible to offer a low technology solution for managing failure and possibly a lower entry barrier for users of B2B solutions.

Within enterprises the same issues occur with message based solutions, and using a simple store of messages sent and received could provide a basic solution to a complex integration problem.

Thursday, December 02, 2004

SOA, XP and Open/Corporate Source
Recently we have been seeing a push towards the use of an extreme programming methodology, based on the notion that it must be possible to build things quicker than we do. At the same time there has been an initiative to put together a consistent architecture based on the SOA model. Alongside this is a realisation that there is an amazing amount of useful, high quality code out there in the Open Source world. The emerging picture is also beginning to include the idea of Corporate Source, where the Open Source techniques of making source publicly available are used within a company.

The question that arises from all this is: can it all work together? Is it possible to gain the apparent efficiencies of the Open Source, eXtreme Programming model within a corporate environment that thinks it needs a strong yet flexible architecture?

As I thought about all this it became clear that the Open Source model has some important things to offer a company beyond just the savings. The main thing it does, and XP backs this up, is that it encourages people to relinquish ownership of code; others have to look at it, and the result can only be better. If you add JUnit style test harnesses that are available to integration testers, you end up with a completely different atmosphere. No longer is one individual key to any one project, as their code is available for others to decipher and probably refactor. This alone, in my view, would be enough to encourage companies not only to use Open Source code in their projects but also to insist that developers, even external companies, make their code available as part of the deliverable.
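To give a flavour of the kind of test harness I mean, here is a minimal JUnit style example; the small helper under test is invented purely for illustration, the point being that the tests are published alongside the code so anyone picking it up can verify and refactor it with confidence.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderNumberTest {

    // Hypothetical helper: derives the Web Message Store file name for an order.
    static String storeFileName(String orderId) {
        return orderId.trim() + ".xml";
    }

    @Test
    public void buildsFileNameFromOrderId() {
        assertEquals("42.xml", storeFileName(" 42 "));
    }
}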

Having been down the Open Source route myself, there is nothing better than knowing others will see my code, and may use it, to make me try to make it good. This encourages one to talk to others about how to do things and to look at other people’s code to solve problems you are experiencing. There is also nothing better than knowing that others are using your code, whether through being asked what it can do or by receiving bug reports. The buzz from having produced something useful is what motivates a great number of programmers. The other buzz the new programming models produce is that, using the test driven method, I have for the first time written complex code that I know works. The relief this has given means I can relax when I hear that others are using my code.

So, as you can see, I am very keen to see the traditional corporate programming model of hiding code within the team broken, and to allow ALL code within a company to be open. It is also worth noting that Open Source projects are both well managed and fairly relaxed. Sourceforge.net allows thousands of projects to be created knowing that only a few will be successful and move to a version 1.0 product. This does not matter; it still enables a developer to borrow code from someone if it does the job, even if it is slightly rough at the edges.

To enable this corporate sourcing it would be necessary to supply a free to use platform similar in nature to the Sourceforge platform, providing both places for managing code and the ability to publish web pages. It should also have processes for publishing new versions, along with the associated announcements, bug tracking and so on. The key to the success of this kind of platform is the relaxed, fairly free manner in which it is provided. If corporate policies such as those applied to official web servers were applied here, the result would be less use and a poorer take up of the facilities.

Before going down a wholesale Corporate Source route it is worth asking how much a corporation really benefits from hiding its resources in house. If a company published the code for everything it did, would it be giving away so much that it would in fact be vulnerable in the market? I am not convinced that the average corporation is better off hiding its code. I suspect that in most cases the reason for hiding it is fear of embarrassment, not genuine competitive advantage. In most industries the desire to reduce costs has led to the introduction of industry standards bodies. If companies put their efforts into producing code for their own use in a standard way and then published that code, the standard would be more widely adopted and the result would be a more widely used standard and probably improved code.

The final point on the Open Source debate, before turning to look at SOA, is the use of COTS packages and external outsourced programming shops. Maybe it is time for corporations to ask for the source code to a product when they buy a licence. In this way they are able to monitor quality and possibly even temporarily fix bugs themselves, sending them to the supplier in the same manner as Open Source patches. The other effect is that the COTS product would no longer be subject to problematic integration points. If the corporation wanted to access data in a non-COTS manner, it would have the code to test the effect and to build in the hooks needed to do the work. It is this reluctance of the COTS vendors to support genuine SOA interfaces that has meant slow delivery timescales and poor integration solutions. It would also mean that any extensions the corporation made to the COTS product could be done in such a manner as not to affect the underlying product, improving the likelihood of being able to migrate to new versions. It would also give the corporation a better understanding of the product and therefore prevent the lock in we see today.
Outsourced programming shops should be encouraged to publish their code in the same manner as the Corporate Source model. In fact this should be a clear advantage of the model, as it would make the quality more easily seen and also allow the outsourced agency to reuse other pieces of the infrastructure more easily.

Some have felt SOA and all the above to be mutually exclusive. What is the benefit of Open Source and reuse if there is only going to be one instance of a service? For example, if a corporate mainframe makes a particular function available through a web service, where would publishing the source help? In most cases the services that become useful within a corporation are in fact amalgams of previously engineered capabilities. Even the simplest service might require logging, security and persistence resources. Publishing the code for one service might enable a new version to be built more easily.

For some time now I have been pushing the model of interfaces that Microsoft have used for years in building the Windows platform. In this model a service is never deleted; APIs for Windows 95 are still available should ancient programs require them. New services are built from, and deployed alongside, old ones. This model seems to work fine for a PC, but would it work for a corporation? The main issue is access to underlying data and resources. If the data changes under a service, for example a database schema is modified to allow new functions, the service no longer works. In this case the link between resources and services needs to be maintained. With regression suites such as those proposed by test driven programming, the services that fail after the change would be very obvious, and it would then be necessary to reprogram the old service until it worked. By adopting this model the clients of a service are unaffected by the new introduction, and the deployment of new services becomes much less dramatic for the corporation.
By making the code of the original service available to all, the people responsible for the change to the underlying data source can see the effects their change will have, and they can take responsibility for making sure the old code continues to work without having to establish permission from the original owners. So now they not only deploy the data changes and the new services, they also redeploy the old service in a new form.
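A minimal sketch of what this "never delete a service" model might look like in code, with the old interface re-implemented as a thin adapter over the new one after the schema change; all the names here are illustrative.

public class CustomerServices {

    // The original contract, kept alive indefinitely for existing clients.
    interface CustomerServiceV1 {
        String findCustomerName(String customerId);
    }

    // The new contract introduced alongside it, reflecting the new schema.
    interface CustomerServiceV2 {
        String findCustomerDisplayName(String customerId, String locale);
    }

    // V1 redeployed as a thin adapter over V2 after the data change,
    // so the V1 regression suite keeps passing and its clients are untouched.
    static class CustomerServiceV1Adapter implements CustomerServiceV1 {
        private final CustomerServiceV2 delegate;

        CustomerServiceV1Adapter(CustomerServiceV2 delegate) {
            this.delegate = delegate;
        }

        @Override
        public String findCustomerName(String customerId) {
            return delegate.findCustomerDisplayName(customerId, "en-GB");
        }
    }
}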

One of the other effects of encouraging openness amongst the development community, even if the code is expected to be deployed only once, is that it may allow other parts of the business that operate in a similar fashion to build on the previously existing service code. For example, if a business is split across countries or divisions, it is very likely that similar functions are replicated. Allowing these other entities access to the code would allow them to reuse it even if they have different underlying capabilities. It is my experience that picking up someone else’s working code enables me to stand on their shoulders and provide a better solution. Examples abound in the public Open Source environment, and the establishment of projects such as the Apache Commons shows that reuse can be a useful by-product even if the original aim of the code was very specific to one function. In fact the majority of coding that takes place today is based on copying existing examples and making them fit your needs; there is almost no other way to understand the complex libraries that have proliferated over the last few years. Knowing the Java language is easy; knowing the right library is the key skill difference between the novice and the expert.

Conclusion
SOA needs to have fixed numbers of resources deployed in a controlled manner. To date, successful Open Source projects have tended to be platform based and not specific to a particular data source. However, the benefits of Open Source to the mindset of corporate development are too important not to pursue. SOA is also critical to allowing large enterprises to flex the resources underlying services and to enabling heterogeneous clients to access the capabilities offered by a service. Therefore the two models should be seen as supporting each other rather than being mutually exclusive.

Future
There is a clear need for enterprises to move on both fronts. Spending time looking at current screen based operations and providing underlying APIs to the resources in question is a key enabler for SOA. This is one of the areas where Microsoft’s APIs have been very successful: allowing other applications to call APIs within another application’s space allows reuse at a level never expected when the original application was built. For example, Word’s spell checking capability is just as useful when designing a flowchart as when producing email or producing this document.
Providing a free at the point of use platform for code sharing and publication, similar to the Sourceforge model, is crucial to encouraging developers to share their code. Alongside this, encouragement for in house developers to use Open Source code from outside the business is also needed. This will encourage the right mindset for sharing and for seeing the benefits of sharing. The platform needs to offer both simple sharing models, such as a repository of files, and the use of code management tools such as CVS and its like. Before spending begins on such a platform, the enterprise should consider whether sharing in a public manner might have some benefits, especially where the desire to use standards across the industry is seen as important.

Lastly, making sure there is a suitably simple ‘Google’ like search engine for the enterprise’s intranet is very important. If this could be combined to return results not just from the intranet but also from associated internet resources, so much the better. The key to making this work is to adopt a decentralised model for most of the key functions. For example, allowing the search engine to roam over the whole intranet would enable small projects to maintain the islands of information they see as necessary without hiding their resources from the masses. It is interesting that functions such as Wikis and other free to use capabilities actually encourage a voluntary centralisation, and by adopting an open policy to information on such sites the mindset of the enterprise can move from empires with walls to a much more community based environment where sharing is seen as key to being a good corporate citizen.
