Web Services between Dot Net and Not Net

If you've only ever worked with Web Services in Dot Net, you could be forgiven for expecting it to be easy to use Web Services to interface with other platforms. In Visual Studio, it's all a bit Fisher-Price: you define your Web Methods, then add a Web Reference to the client and everything ticks along nicely. You don't even see any XML.

Recently I've been working for a client getting a Dot Net Web Service to work with a third-party system built in Perl. I have now discovered there are two sorts of web services:
1) Mickey Mouse Web Services in which the client and server both run Dot Net
2) Proper, Serious Web Services in which the server runs Dot Net and the client runs Not Net (anything else).

Other people in the industry seem to have noticed this too, and the current official term for Proper Serious Web Services is 'Interoperable Web Services'; that is, Web Services That Actually Operate.

Other people have already written lots of advice for building Interoperable Web Services. Here are a few articles I've found useful:
Returning DataSets from WebServices is the Spawn of Satan and Represents All That Is Truly Evil in the World (from Scott Hanselman's blog)
Top 5 Web Service Mistakes (by Paul Ballard)
Top Ten Tips for Web Services Interoperability (by Simon Guest at Microsoft)

One piece of advice that keeps cropping up for building Interoperable Web Services is to build them 'Contract First'. The teams working on the client and server ends of the web service get together and agree the XSDs that define the request and response of each Web Method. This irons out any problems with supported or unsupported types at an earlier stage. Code is then generated from the XSDs (or the WSDL) rather than the other way round.
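To give a flavour of what gets agreed up front, here's a minimal, hypothetical request schema (the namespace URI and element names are invented for illustration). The important bit for interoperability is the explicit targetNamespace:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical request contract; names and URI are examples only -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/orderservice"
           xmlns:tns="http://example.com/orderservice"
           elementFormDefault="qualified">
  <xs:element name="SubmitOrderRequest">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderId" type="xs:int"/>
        <xs:element name="CustomerName" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Both teams can validate their sample messages against a schema like this before any code exists on either side.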

The third party people we were working with suggested the contract first approach, so we ended up swapping XSDs back and forth. This worked pretty well. When it came to code generation, Visual Studio 2005 provides two command-line utilities, xsd.exe and wsdl.exe, for generating code from XSDs or WSDL files.
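For the record, an xsd.exe invocation for this is along these lines (the file name and namespace are placeholders for whatever your contract uses):

```
xsd.exe OrderService.xsd /classes /language:CS /namespace:Example.OrderService /outputdir:Generated
```

This emits C# classes decorated with the XML serialization attributes that the rest of this post complains about.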

We went down the XSD.exe route, as we didn't have any tools handy for building the WSDL from scratch. We then built the Web Methods using the objects that had been generated by XSD.exe. This helped somewhat, but we still had a series of 'interoperability' problems getting our Interoperable Web Services to interoperate. I'll describe some of them to give an idea of how inoperable interoperability can be:

Problems With Blank Namespaces

We had a proper Namespace for the Web Methods (which was defined using a [WebService(Namespace = "something")] attribute on the class that held the Web Methods) but some of the classes generated by xsd.exe had XmlRootAttribute() attributes that specified a blank namespace, like this:
[XmlRootAttribute(Namespace="", IsNullable=false)]
It turned out that this led to the web service expecting a blank xmlns attribute in the incoming SOAP message, like this:
    <ExampleWebMethod xmlns="something">
      <ExampleRequestStructure xmlns="" >

When Dot Net tried to call this web method, it had no problems (because it was following the WSDL exactly and so put the blank namespace in). But the third-party Perl guys were hand-coding their request code and were getting tripped up by the missing xmlns="" in the SOAP requests they were sending. Eventually we figured it out and removed all the blank namespace definitions from the code, so that the XmlRootAttribute looked like this:
[XmlRootAttribute(IsNullable=false)]
We also had to remove the blank Namespace="" settings from some of the XmlElementAttribute attributes in the same way.
(If we had used targetNamespace in our XSDs, or built a WSDL file and generated code from that, we probably would have avoided this issue.)
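The underlying XML rule here is easy to check with any XML parser: xmlns="" doesn't mean "inherit the default namespace", it puts the element into no namespace at all. A quick sketch in Python (the namespace URI is just an example):

```python
import xml.etree.ElementTree as ET

# A child with xmlns="" opts out of the parent's default namespace entirely.
soap_body = """
<ExampleWebMethod xmlns="http://example.com/service">
  <ExampleRequestStructure xmlns="">
    <Value>42</Value>
  </ExampleRequestStructure>
</ExampleWebMethod>
"""

root = ET.fromstring(soap_body)
child = root[0]

print(root.tag)   # parent is qualified by the default namespace
print(child.tag)  # child is in no namespace at all
```

ElementTree prints the parent as {http://example.com/service}ExampleWebMethod but the child as a bare ExampleRequestStructure, which is exactly the distinction the hand-coded Perl requests were falling foul of.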

Problems with SOAPAction

In SOAP 1.1, there is an HTTP header called SOAPAction that is supposed to get sent along with the SOAP request XML. For example, if you browse to a Dot Net asmx file and look at the example SOAP 1.1 request, the top of it looks like this:

POST [webservice url] HTTP/1.1
Host: [host]
Content-Type: text/xml; charset=utf-8
Content-Length: [length]
SOAPAction: "[namespace]/[method name]"

... then all the xml stuff ...

where the bits between the square brackets [ ] are filled in with the right values.

SOAPAction is a bit odd because it plays an important part, but it's not actually in the XML bit of the SOAP request. It's there so that the server can route the request to the right method without having to parse the SOAP to find the method name. But XML fans were a bit miffed about having an important part of their SOAP system not actually in the XML at all, so it was dropped from SOAP 1.2.

The problem we had with SOAPAction is that although it is mentioned in the official WSDL 1.1 definition, no specific format is defined. And guess what?

  • Some platforms, such as CGI web services and Perl, use "<namespace>#<method name>" as the format

  • Dot Net uses the format "<namespace>/<method name>"
The Perl guys thought we were doing it wrong; we weren't sure why our Web Methods were insisting on a slash instead of a hash; we couldn't find any definitive statement of what format it was supposed to be in; and it went back and forth for a while. It turns out the SOAPAction in Dot Net can be overridden by manually changing the WSDL. In the end, though, there was a little line of Perl code that got Perl to use the Dot Net format for SOAPAction, and that eventually solved the problem.
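To make the mismatch concrete, here's a sketch of the two header values side by side (the namespace and method name are invented for the example):

```python
def soapaction_dotnet(namespace: str, method: str) -> str:
    """Dot Net style: namespace and method joined by a slash, in quotes."""
    return f'"{namespace.rstrip("/")}/{method}"'

def soapaction_cgi(namespace: str, method: str) -> str:
    """CGI/Perl style: namespace and method joined by a hash, in quotes."""
    return f'"{namespace}#{method}"'

ns = "http://example.com/service"
print(soapaction_dotnet(ns, "ExampleWebMethod"))
print(soapaction_cgi(ns, "ExampleWebMethod"))
```

The two values differ by a single character, which is exactly the sort of thing that's invisible until you compare the raw HTTP requests byte by byte.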

If you're using Perl's SOAP::Lite library to call Dot Net web services, this page may help you:
Simplified SOAP Development with SOAP::Lite at PerfectXML

Problems with Geography

The Geography problem was that we had two different teams in different organisations trying to work together on plumbing issues in the low-level HTTP and SOAP. In the end, most of the problems turned out to be pretty trivial. However, because of the time lag between us putting up a new version of our web services, the other team trying to call them from Perl, and then getting back to us with the results, what should have been trivial troubleshooting took days.

In the end, even though we don't know any Perl, installing it on our local network and running the test code that the Perl guys had provided proved to be pretty helpful. The benefit of being able to run the Perl code in-house whenever we wanted and seeing the results immediately actually offset the cost of not having any Perl skills. When we did need to change the Perl code, a bit of googling always led us in the right direction.

So there's probably a more generalised lesson there: When you're in a situation like this, even if you've never used the other platform, it's likely to be worth setting it up locally just so ideas can be checked and tested in one location instead of two locations having to work together.

