Conformance Testing for XML Processors
September 15, 1999
Introduction
One indication of XML's success is that a dozen or so implementations of an XML processor exist. These processors, spanning a variety of programming environments, are at the core of a new generation of web tools that are revolutionizing the dynamic generation of HTML and enabling new types of web applications, including business-to-business data messaging. But are all XML processors doing what they are supposed to do? Will the tools built with those processors create rivers of interoperable messages and documents? Will they create islands of data that can only be used with a single set of tools?
This article looks at most of the XML processors available today for use in Java-based XML systems and evaluates how closely they follow the XML 1.0 specification. We will provide you with the hard data from our tests, so you can independently evaluate every claim made in this article and reproduce the results yourself. The results leverage the freely available OASIS XML Conformance Test Suite. While that suite has only been published relatively recently, many of its key components have been well known to the XML community for over a year and a half; they shouldn't come as a big surprise for any implementor. Many of these tests have been used for basic quality testing in a variety of XML processors.
If you want a quick overview of the testing results for each type of parser, use the quick reference table at the beginning of each set of test results. From those tables you can link to a discussion of each processor, and from there you can access the complete test reports for each parser (typically 150 KBytes of HTML). Or, read in logical order:
- Learn about the testing that was done, starting with the OASIS suite and adding Java software.
- Test results for the non-validating XML processors are presented after a quick reference table.
- Ælfred
- DataChannel XML Parser for Java
- IBM's XML4j
- Lark
- Oracle's V2 parser
- Silfide XML Parser (SXP)
- Sun's "Java Project X"
- XP
- Results for the validating processors are presented after their own quick reference table. Most of them are built on top of a non-validating processor, and inherit the strengths and weaknesses of that core processor.
- IBM's XML4j
- Microsoft's MSXML
- Oracle's V2 parser
- Sun's "Java Project X"
- The conclusion summarizes some basic lessons for both developers of XML software and users of XML-based systems.
Some Background on XML Conformance Testing
This article uses the following freely available tools in its evaluation of the conformance of XML processors in Java:
- The OASIS XML Conformance Test Suite, a database of test cases and their supporting documentation, usable with XML processors with APIs in any programming language;
- An XML Conformance Test Driver, written to use that test database to exercise any XML processor supporting the SAX API in Java.
Note that this article uses the term XML Processor, as found in the XML specification. In effect this is what is informally called an XML Parser.
OASIS XML Conformance Test Suite
In order to improve the overall quality of XML processors available in the industry, OASIS (the Organization for the Advancement of Structured Information Standards) established a working group to provide an organizational home for a suite of XML conformance tests. That work progressed, and produced its first draft in July 1999.
The goals of this working group were initially limited to providing and documenting a set of test cases covering the testable requirements found in the XML 1.0 specification. The work of applying those tests, and evaluating their results, was intentionally left separate. (Vendors can create test drivers specific to their APIs, and use them as part of their quality control processes.) Accordingly, the results provided by the OASIS group were as follows:
- A suite of test cases, XML documents for which any conformant XML processor has a known result. These test cases came from several sources:
- XMLTEST cases, which have been widely available since early in the life of the XML specification, and focus on non-validating processors.
- Fuji Xerox Japanese XML Tests, available for slightly less time, consisting of two documents with many Japanese characters, in a variety of encodings and syntactic contexts.
- Sun Microsystems Tests, designed to augment XMLTEST and the Fuji Xerox tests by adding coverage of all validity constraints in the XML specification, and by incorporating tests for various customer-reported problems not previously tested.
- OASIS Tests, written by the OASIS group with direct reference to the XML specification. They were intended to provide rigorous coverage of many of the grammar productions.
- A valid XML document acting as a test database, describing each test in terms of: (a) what part of the XML specification it tests; (b) any requirements it places on the processor, such as what types of external entities it must process; (c) an interpretation of the intent of that test; and (d) a test identifier, used to uniquely identify each test.
- Preliminary XSL/T style sheets to process that test database and produce documentation covering all of the tests. (One conformed to a recent draft of the XSL/T specification, and another was usable in the Microsoft IE5 browser.)
That test database was designed to be used for at least two specific applications: producing the test documentation described above, and generating an easily interpreted testing report in conjunction with a test driver. (Such testing reports are the focus of this article.)
It is important to understand the structure of the tests provided by OASIS. This structure came from basic principles of software testing, and from the content of the XML specification itself. Read the test documentation for full information, and for a brief summary just understand that:
- Different kinds of processors (validating, non-validating, etc) have slightly different testing criteria, as identified in the testing reports discussed here;
- Tests include positive test documents which the processor must accept (producing at most a warning);
- Tests include negative test documents for which the processor must report an appropriate type of error for the thing being tested by that document (the error must always relate to the case being tested; it must also be continuable for validity errors, and fatal for well-formedness errors);
- Associated with some positive tests are output documents which hold a normalized representation of the data that type of processor is required to report when parsing the input document.
The information in the test report for a given XML processor can accordingly be quite complex.
XML Conformance Test Driver
This article presents the results of tests as driven by a test driver I wrote. It is available under an Open Source license at the URL given above. The driver reads the test database, applies it to the processor under test, and writes out a detailed report. It uses a report template mechanism. Such reports are valuable to developers who can use them to identify bugs that must be fixed without needing to look at perhaps hundreds of test cases. They are also useful to anyone who wants to evaluate a parser for use in an application. Moreover, such reports are invaluable for interpreting negative test results, as described later.
This test driver uses the SAX API to perform its testing. This is the most widely supported XML processor invocation API in Java, as evidenced by the fact that this article can look at a dozen such processors! Of necessity, the driver tests SAX conformance as much as it tests conformance to the XML specification, since the SAX API is the way the processor reports the information that the XML specification requires it to report. For example, it is this API that offers the choice between continuing after a validity error or not; and it is this API which exposes the characters, elements, and attributes that a processor reports to an application.
(Note that DOM can't be an API to an XML processor in its current form. One basic issue is the lack of a standard for getting a DOM tree from a given document; there is also no standard way to report parsing errors, or to choose whether to continue processing after a validity error. DOM is also less widely supported than SAX.)
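To make that concrete, here is a minimal sketch, not the actual driver described above, of how a SAX 1.0 harness might run and classify a single test case. The class name and command-line conventions are hypothetical; only the org.xml.sax calls are real API. A full driver also distinguishes validity errors from well-formedness errors and compares reported data against the expected output files, as described earlier.

```java
import org.xml.sax.*;
import org.xml.sax.helpers.ParserFactory;

// Hypothetical skeleton of a SAX 1.0 conformance harness for one test case.
public class MiniConformanceDriver {
    // Filled in by the ErrorHandler during a single parse.
    static int warnings, validityErrors, fatalErrors;

    public static void main(String[] args) throws Exception {
        String parserClass = args[0];            // a vendor's SAX driver class name
        String testDocument = args[1];           // URI of one test case
        boolean negativeTest = args.length > 2;  // any extra argument marks a negative test

        Parser parser = ParserFactory.makeParser(parserClass);
        parser.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e)    { warnings++; }
            public void error(SAXParseException e)      { validityErrors++; }
            public void fatalError(SAXParseException e) { fatalErrors++; }
        });

        try {
            parser.parse(testDocument);
        } catch (SAXException e) {
            // Some processors abandon the parse after reporting an error;
            // the counters above still record what kind of error it was.
        }

        // Positive tests must be accepted (warnings are allowed);
        // negative tests must provoke an error of the appropriate kind.
        boolean accepted = (validityErrors == 0 && fatalErrors == 0);
        boolean passed = negativeTest ? !accepted : accepted;
        System.out.println((passed ? "PASS " : "FAIL ") + testDocument);
    }
}
```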
The first draft of the OASIS suite has some bugs in its test database. Specifically, the relative paths to the test cases are incorrect (the file locations changed during the publication process without their URIs being updated), and a few output test cases were omitted. Read the documentation in the test driver for information about how the test database was manually patched to fix those problems for the test runs documented here.
Analyzing XML Conformance Test Reports
Using the test driver and the OASIS tests, a report is generated for each processor. That report summarizes the results for over one thousand test cases. For brevity, only errors and cases needing further interpretation are presented in each report.
The first step in analyzing each processor was to identify any patterns in the clear failures (both positive and negative). Some processors have strong patterns, such as one or two particularly severe bugs which skew the results. Others have more scattershot patterns of bugs; in such cases, I have tried to provide a summary that indicates the types of errors which were most commonly reported.
As a practical matter, I had to put a time limit on this analysis. Those processors with more than about two hundred clear failures got less quality time invested in their analysis. That limit was somewhat arbitrary, but enough processors fell within it that the analysis still took considerable time. Failing more than twenty percent of these tests is at best a marginal grade, and such analysis would be better done by the implementor preparatory to fixing their processor's bugs.
The second step was to interpret the negative tests to understand whether any of those passed tests were actually failures ... and adjust the count of "passed" tests (down) appropriately. This work can't easily be done without the test interpretation data from the test documentation, since it requires comparing the reason each test failed (its diagnostic) with the reason it was supposed to fail (as shared between the test documentation and this test report). If the reasons don't match, the parser has a problem: the test cases are only supposed to have the one error! Typically, such mismatches have indicated conformance bugs, but they can also indicate incorrect diagnostics.
This work is not at all straightforward, and is strongly influenced by the quality of diagnostics that a processor provides. Processors with poor diagnostics (such as the classic "syntax error on line 23") can't be effectively analyzed. This analysis is also quite costly in terms of time. For some (good) processors, there are well over six hundred tests that need such analysis. That is well over the arbitrary limit of two hundred tests mentioned above, and the better processors often need more work than the worse ones. (Note that there is a clear end-user benefit to such work: clear diagnostics make products easier to use, and such analysis is unlikely to be done at all when diagnostics are bad.)
Accordingly, only the most egregious errors of this type were normally identified. If a more thorough analysis was performed, perhaps because the diagnostics were notably good or bad, this is noted.
What about Non-Java XML Processors?
Just because this article looks only at Java processors, don't assume that such testing can only be done for processors in Java!
In fact, other XML processors deserve the same kind of testing (and consequent bug fixing), particularly the widely available processors found in XML-enabled web browsers (Mozilla, IE5) and in desktop environments like Gnome and KDE. It should be straightforward to create a Java wrapper with a SAX API for the Mozilla and Gnome processors (which have event driven APIs) and that may also be possible for the processors in KDE and in IE5. That would let this test driver be used, automatically providing a usable test report format. (And having that SAX API to the native C code might be handy too.) Alternatively, custom test drivers could be written for each of those specific XML processor APIs.
Disclaimers
These results are not in any sense "official", although they will be enlightening nonetheless. The test cases themselves are under review, for content as well as completeness of coverage, as is the driver used to generate these results. Some of the XML specification interpretations associated with the test cases are dependent on errata from the W3C XML team that have not yet been published; when those get published, some of the categorizations of individual tests may need to change.
Comments about those interpretations of the XML specification should be directed to the OASIS XML Conformance working group, or to the W3C in cases where the XML specification is unclear. Comments about the test driver itself should be directed to the author.
Also, not only am I the author of this article and the test driver, but I was the developer of one of the processors (Sun's, although I no longer work there) and was also a significant contributor to the OASIS work. I attribute the good ranking of the processor I wrote to paying careful attention to testing against the XML specification, in the same way I attribute the corresponding ranking of XP to such attention. (The author of XP is the author of the XMLTEST cases, and also contributed to the XML specification itself.) I have attempted to avoid factual bias.
With all of that said, these testing results are still the best that are generally available at this time, and should be an effective guide to anyone needing to choose (or bug-fix) an XML processor in Java.
Non-Validating XML Processors
The bulk of the XML processors tested were non-validating ones. For applications where enforcing the rules in a document type declaration is not necessary, a wide variety of choices and design tradeoffs is available. The Open Source parsers are invariably non-validating.
This table provides an alphabetical quick reference to the results of the analysis for non-validating processors:
| Processor Name and Version | Passed Tests | Summary |
| --- | --- | --- |
| Ælfred 1.2a (July 2, 1998) | 865 | If you want to trade off some correctness to get a very small parser, look at this one. |
| DataChannel XML Java Parser (April 15, 1999) | 327 | Don't even consider using this package until its bugs are fixed. |
| IBM XML4j 2.0.15 (August 30, 1999) | 832 | Two notable bugs are the root cause of most of the errors detected in this processor. |
| Lark 1.0beta (January 5, 1998) | 923 | More of the current generation of processors should be as conformant as this one! |
| Oracle XML Parser 2.0.0.2 (August 11, 1999) | 904 | This new entry on the processor scene is quite promising, despite rough edges where it should learn from its validating sibling. |
| Silfide XML Parser (SXP) 0.88 (July 25, 1999) | 731 | Better choices are available for standalone XML processors. |
| Sun "Java Project X" TR2 (May 21, 1999) | 1065 | No conformance violations detected. |
| XP 0.5beta (January 2, 1999) | 1050 | This processor is all but completely conformant. |
More detailed discussion of each processor is below, in alphabetical order, with links to the complete testing reports.
Ælfred
Processor Name: Ælfred
Version: 1.2a (July 2, 1998)
Type: Non-Validating
DOM Bundled: No
Size of JAR File: 65 KBytes with SAX (uncompressed)
Download From: http://www.microstar.com/aelfred.html
This processor is uniquely light weight; at about 33 KBytes of JAR file size (compressed, and including the SAX interfaces), it was designed for downloading in applets and explicitly traded off conformance for size. While it has not been updated in some time, it is still widely used.
Full Test Results: report-aelf.html
Raw Results: Passed 865 (of 1065)
Adjusted Results: Passed 865
This processor rejects a certain number of documents it shouldn't, and isn't clear about why it does so. (Good diagnostics would have cost space, which this processor chose not to spend.) There seems to be a pattern where the processor expects a quoted string of some kind and is surprised by what it finds instead.
There are cases where it's clear why those documents were rejected. For example, syntax that looks like a parameter entity reference but is found inside a comment should be ignored, but isn't. The XML spec itself uses such constructs in its DTD, but its errata haven't yet been updated to address the issue of exactly where parameter entities get expanded and where they don't. (Were it not for the example of the XML spec itself, and feedback from the XML editors on this issue, this processor would appear to be in compliance on this point.)
Character references that would expand to Unicode surrogate pairs are inappropriately rejected. Nobody has any real reason to use such pairs yet, so in practice this isn't a problem. And the data provided as output is also not correct in all cases.
The bulk of this processor's nonconformance lies in the fact that it consciously avoids checking for certain errors, to reduce size and to some extent to increase speed. For example, characters are rarely checked for being in the right ranges, saving code in several locations both to make those checks and to report the associated errors. Certain syntax rules, like "--" being illegal in comments and "]]>" being illegal in normal text, are also ignored.
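For reference, the whole check being skipped is small; here is a sketch (not Ælfred's code) of the character range test required by the XML 1.0 Char production.

```java
// A sketch of the character-range check required by the XML 1.0 "Char"
// production. Processors that skip it (to save space or time) will quietly
// accept documents containing control codes and other illegal characters.
class XmlCharCheck {
    static boolean isXmlChar(int c) {
        return c == 0x9 || c == 0xA || c == 0xD
            || (c >= 0x20 && c <= 0xD7FF)
            || (c >= 0xE000 && c <= 0xFFFD)
            || (c >= 0x10000 && c <= 0x10FFFF);
    }
}
```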
In short, most of the time if you feed this processor a legal XML document it will parse it without needing many resources. But if you feed it illegal XML, it won't be good about telling you that anything was wrong, or exactly what was wrong.
DataChannel XML Java Parser
Processor Name: DataChannel XML Java Parser
Version: (April 15, 1999)
Type: Non-Validating
DOM Bundled: Yes
Size of JAR File: 448 KBytes (uncompressed)
Download From: http://xdev.datachannel.com/
This parser is part of a package developed with the assistance of Microsoft, providing a Java implementation of much of the XML manipulation functionality in Internet Explorer 5. While it is freely available, support (such as bug fixes) costs. Validation is available in the package, but not through the SAX API.
Full Test Results: report-dcxjp.html
Raw Results: Passed 627 (of 1065)
Adjusted Results: Passed 327
This SAX parser is not currently usable. It rejected almost all documents, due to a simple bug that no other parser has needed to stumble over.
These rejections include all of the direct failures, as well as the huge number of "false passes" on the negative tests ... almost three hundred were caused by this error alone. (Few other parsers had that many failures at all; none had as many "false passes".) The exception is a TokenizerException like the following:
Unrecognized token following a '<!' sequence! (line 1, position 4, file:/db/xml/xmlconf/xmltest/valid/sa/093.xml)
For the record, the document in question is this innocuous snippet, primarily useful as an output test (the content of the <doc> element has CRLF and CR line ends, which should normalize to three LF characters):
<!DOCTYPE doc [ <!ELEMENT doc (#PCDATA)> ]> <doc> </doc>
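For reference, the end-of-line handling that output test exercises is the XML 1.0 rule (section 2.11): every CRLF pair, and every carriage return not followed by a line feed, must reach the application as a single line feed. A minimal sketch of that normalization:

```java
// A sketch of XML 1.0 end-of-line handling (section 2.11): each CRLF pair,
// and each lone CR, must be reported to the application as a single LF.
class LineEnds {
    static String normalize(String text) {
        StringBuffer out = new StringBuffer(text.length());
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c == '\r') {
                out.append('\n');
                if (i + 1 < text.length() && text.charAt(i + 1) == '\n')
                    i++;    // swallow the LF of a CRLF pair
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```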
Many of the documents which this processor accepted were documents which contained illegal XML characters, and so they should have caused fatal errors to be reported.
Speaking as a systems developer, I find it hard to believe that this package was released without knowledge of these bugs, and harder to understand why they weren't fixed in the months since the first release. If DataChannel wasn't using the XMLTEST cases all along, it should have been. In any case, they were definitively informed about this bug in the first week of August, and it remains unfixed at this writing.
IBM XML4j
Processor Name: IBM XML4j
Version: 2.0.15 (August 30, 1999)
Type: Non-Validating
DOM Bundled: Yes
Size of JAR File: 722 KBytes (uncompressed)
Download From: http://www.alphaworks.ibm.com/
IBM's package includes several processor configurations, including validating and DOM-oriented parsers, and it works well with other XML software provided by the company. It gets regular updates. As "alphaworks" software, it has no guarantees. Commercial usage permission can be granted.
Full Test Results: report-xml4j-nv.html
Raw Results: Passed 902 (of 1065)
Adjusted Results: Passed 832
What sticks out the most about this processor is that just two clear cases of internal errors seem to dominate the test failures, making it reject many well formed documents which it should have accepted. These also mask other errors that the processor should have reported. (The same symptoms exist in the validating processor, which shares the same core engine.)
Thankfully, those internal errors don't show up often enough to keep this processor from correctly handling the bulk of the test suite. If they were fixed, this processor might do quite well on a conformance evaluation, rather than being below the median.
Beyond those two bugs, a few other problems also turned up. This processor seems to have some problems reading UTF-16 text. In some cases it rejects XML characters that it should accept. That's significant since the result was rejecting many of the XML documents which used non-English characters.
What were those bugs? Well over half of the wrongly rejected documents (and a significant number of the misdiagnosed ones) are cases where the processor
- expects an end tag inappropriately ... often labeled as "null", a strong indication of an internal error (null pointer) particularly since that tag name wasn't used; or labeled with the target of a processing instruction, another such indication.
- inappropriately reports a recursive entity expansion ... it appears that this diagnostic is produced in cases of correct entity use, as well as situations that don't involve entities at all.
These were reported to IBM when this processor was first released, and it is not clear whether any of the subsequent releases have reduced the frequency of these false errors.
Lark
Processor Name: Lark
Version: 1.0beta (January 5, 1998)
Type: Non-Validating
DOM Bundled: No
Size of JAR File: 135 KBytes (uncompressed, with SAX classes)
Download From: http://www.textuality.com/Lark/
Lark is one of the older XML processors still in use. It was written by Tim Bray, one of the editors of the XML specification, in conjunction with that specification, partly to establish that the specification was in fact implementable. It is not actively being maintained.
Full Test Results: report-lark.html
Raw Results: Passed 923 (of 1065)
Adjusted Results: Passed 923
This processor rejects a few too many documents which it should accept, and doesn't produce the correct output in a number of cases. However, it is quite good at rejecting malformed documents for the correct reasons.
Quite a lot of the documents that this processor rejects have XML declarations which aren't quite what it expects, in some cases seemingly because they contain standalone declarations. Others use some name characters which aren't accepted. There appears to be a declaration ordering constraint imposed by the processor, as well as difficulties handling conditional sections. Character references that expand to surrogate pairs are not accepted.
Oracle XML Parser
Processor Name: Oracle XML Parser
Version: 2.0.0.2 (August 11, 1999)
Type: Non-Validating
DOM Bundled: Yes
Size of JAR File: 556 KBytes (uncompressed)
Download From: http://www.oracle.com/xml/
Oracle has a new version of their package, which appears promising. It includes XSL/T support, and has a compact API that supports both validating and non-validating processing. At this time, this implementation is not licensed for commercial use.
As this article went to press, a new version of the Oracle XML Parser for Java, v2.0.2, was released. In addition to some bug fixes in the XML processor, which do not seem to affect the overall ratings for these processors, this version includes support for the August XSL/T working draft.
Full Test Results: report-oracle-nv.html
Raw Results: Passed 904 (of 1065)
Adjusted Results: Passed 904
This processor was quite good about not rejecting documents it should have accepted, but needs some work yet on reporting the correct data and on rejecting some illegal documents. Its diagnostics made the task of analyzing its test results easy; I was able to analyse the negative test results much more thoroughly than for most other processors. That will in turn make life easier for the users of applications built with this processor.
With respect to the output from this processor, there were a handful of cases where incorrect data was reported. From analyzing a subset of these cases, I noticed:
- Second declarations of attributes not being ignored;
- Incorrect whitespace treatment in attribute values and in entity expansions;
- Misinterpretation of multibyte UTF-8 input characters;
- SAX DTDHandler callbacks not being invoked.
There appear to be some problems with character set handling. Unicode surrogate pairs are not handled correctly, and some text encoded in UTF-16 was incorrectly rejected. Another issue with character handling is that some characters which should cause fatal errors (such as form feeds, misplaced byte order marks, and some characters in PUBLIC identifiers) are permitted.
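For reference, the mapping involved is small; this sketch shows how a code point above U+FFFF (for example, from a character reference such as &#x10000;) must be turned into a UTF-16 surrogate pair before being handed to SAX's char-based callbacks.

```java
// A sketch of the UTF-16 surrogate-pair encoding for code points above U+FFFF;
// several of the processors tested here get this mapping wrong.
class SurrogatePairs {
    static char[] encode(int codePoint) {                 // 0x10000 <= codePoint <= 0x10FFFF
        int offset = codePoint - 0x10000;
        char high = (char) (0xD800 + (offset >> 10));     // top 10 bits
        char low  = (char) (0xDC00 + (offset & 0x3FF));   // bottom 10 bits
        return new char[] { high, low };
    }
}
```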
Perhaps the most worrisome case of wrongly accepting a document was accepting a document which omitted an end tag. This processor even accepts SGML tag minimization and exception specifications in its element type declarations. Even if this were intentional, it is a substantial bug to enable this by default on a processor calling itself an "XML" processor. The acceptance of such SGML syntax is one of the more notable patterns of errors in this processor. It is also puzzling since its validating sibling handled such syntax correctly (rejecting it with fatal errors).
There were a variety of cases where array indexing exceptions were reported, or where certain syntax was incorrectly accepted. Many of those are attributable to this being an early release.
There are some issues with the reporting of errors through SAX; in many cases, the processor doesn't pass the correct exception object through, but instead substitutes a different one. This can affect application code.
Silfide XML Parser (SXP)
Processor Name: Silfide XML Parser (SXP)
Version: 0.88 (July 25, 1999)
Type: Non-Validating
DOM Bundled: Yes
Size of JAR File: 279 KBytes (or 595 KBytes uncompressed)
Download From: http://www.loria.fr/projects/XSilfide/
SXP is part of the "Silfide" project, a client/server based environment for distributing language resources built with XML and Java. It is currently a prototype, and is not available for commercial use. Silfide incorporates a 100% Pure Java web server, and SXP implements early drafts of XPointer and XLink.
Full Test Results: report-sxp.html
Raw Results: Passed 761 (of 1065)
Adjusted Results: Passed 731
This component of the Silfide system appears not to have received as much attention as other parts of it. Also, it thrashes on systems with only 64 MBytes of physical memory.
Support for UTF-16 and UTF-8 encodings is not strong; the encodings don't appear to be autodetected, so their data is handled incorrectly. While many other errors are visible, the diagnostics are not often clear about what was expected.
Output tests seem to fail in large part because the processor reports character data outside the context of an element, where only markup exists.
A number of illegal characters were accepted, including malformed surrogate pairs and out-of-range characters, which should have been rejected.
Sun "Java Project X"
Processor Name: Sun "Java Project X"
Version: TR2 (May 21, 1999)
Type: Non-Validating
DOM Bundled: Yes
Size of JAR File: 132 KBytes (or 246 KBytes uncompressed)
Download From: http://java.sun.com/products/xml
This package includes validating and nonvalidating parsers, which may optionally be connected to a DOM implementation. Sun plans to turn this processor into the reference implementation of a Java Standard Extension for XML, with APIs that are yet to be specified (beyond inclusion of SAX and DOM). Commercial use is permitted, and this package has been relatively stable for some time.
Full Test Results: report-sun-nv.html
Raw Results: Passed 1065 (of 1065)
Adjusted Results: Passed 1065
This processor reported no conformance errors. That was a design goal of the processor.
I analysed the negative results when I worked at Sun, and believe that every diagnostic reports the correct error. (This is the only parser that I can report I have carefully examined for that issue.)
XP
Processor Name: XP
Version: 0.5beta (January 2, 1999)
Type: Non-Validating
DOM Bundled: No
Size of JAR File: 166 KBytes (uncompressed, without SAX classes)
Download From: http://www.jclark.com/xml/xp/
This processor was written by James Clark, who served as technical lead for the XML spec and as editor of the XSL/T specification. (He's also written the most widely used SGML implementation, and done many other things in this community.) It is available for commercial use.
Full Test Results: report-xp.html
Raw Results: Passed 1050 (of 1065)
Adjusted Results: Passed 1050
This is one of the most conformant XML processors available. In contrast to the variety of problems shown by most other processors, this test suite identifies only three categories of specification violation in XP:
- Violations of two validity constraints relating to nesting of parameter entities are treated as fatal errors, instead of nonfatal ones as a validating processor would, or even as non-errors as most non-validating processors do. The author of this processor has communicated to me that he does not see fixing these as necessary, and I can concur. (These account for the bulk of the reported errors.)
- Treatment of parameter entities does not strictly match what is expected by the test suite. These relate to two interpretation questions which have been raised to the W3C but not answered through the errata process for the XML specification.
- Some output violations relate to reporting notation declarations and normalizing attributes.
In short, a reasonable design choice (based on what I think of as a specification issue), some specification issues, and what appear to be two minor problems. Wouldn't it be nice if all software were this close to its specification!
Validating XML Processors
Not many validating XML processors are available at this time, and most of them are available with a non-validating sibling. The suppliers are all commercial; there are no Open Source validating processors supporting the SAX API, so far as I am currently aware. Applications needing to enforce document type declarations do have options available in other programming languages. Notably, C/C++ packages are freely available, sometimes with SGML support.
This table provides an alphabetical quick reference to the results of the analysis for validating processors:
| Processor Name and Version | Passed Tests | Summary |
| --- | --- | --- |
| IBM XML4j 2.0.15 (August 30, 1999) | 832 | This has the problems of its non-validating sibling, and does not permit validity errors to be continued. |
| Microsoft MSXML in Java, JVM 5.00.3186 (August 24, 1999) | 615 | It's curious that this was bundled into Microsoft's Java VM without fixing its well known conformance bugs. Avoid using it. |
| Oracle XML Parser 2.0.0.2 (August 11, 1999) | 871 | If this just permitted continuation of validity errors, it would be a top contender. |
| Sun "Java Project X" TR2 (May 21, 1999) | 1065 | No conformance violations detected. |
More detailed discussion of each processor is below, in alphabetical order, with links to the complete testing reports.
IBM XML4j
Processor Name: IBM XML4j
Version: 2.0.15 (August 30, 1999)
Type: Validating
DOM Bundled: Yes
Size of JAR File: 722 KBytes (uncompressed)
Download From: http://www.alphaworks.ibm.com/
This is the validating version of IBM's processor. See the coverage of the non-validating processor for more details.
Full Test Results: report-xml4j-val.html
Raw Results: Passed 902 (of 1065)
Adjusted Results: Passed 832
The same problems that show up in the non-validating processor also show up in the validating one ... in fact, the processor appears to be doing exactly the same thing in both cases! (I confess this discovery was quite a surprise to me; it may be that this version of the IBM processor is a regression from earlier releases in this respect.)
No validity errors were reported as such; all the invalid documents caused incorrect reports of fatal errors.
Microsoft MSXML in Java
Processor Name: Microsoft MSXML
Version: JVM 5.00.3186 (August 24, 1999)
Type: Validating
DOM Bundled: No
Size of JAR File: N/A (bundled with JVM)
Download From: http://www.microsoft.com/java/
Note that although this parser was originally called MSXML, Microsoft currently uses that term exclusively for its IE5 COM parser ("MSXML.DLL"). The more recent name for the Java parser is the "Microsoft XML Parser in Java". Please do not interpret these results as reflecting conformance for the C parser found in the Internet Explorer 5 web browser.
The MSXML package was originally intended to provide XML support for Internet Explorer 4 users. It was recently bundled with Microsoft's latest version (build 3186) of their Java Virtual Machine and SDK 3.2. A SAX driver is separately available. None of the standard programming interfaces (SAX, DOM) are bundled.
Full Test Results: report-msxml.html
Raw Results: Passed 648 (of 1065)
Adjusted Results: Passed 615
This processor needs a separate SAX driver, since Microsoft has not yet offered support for the SAX API. The processor rejects a substantial number of documents that it should accept, producing fatal errors:
- The processor misreports validity errors as fatal errors.
- It ignores many validity errors.
- A wide range of common grammatical constructs are rejected, such as
- character and entity references in attribute values
- some processing instructions
- attribute names such as "xml:space"
- conditional sections
- Not all XML name characters are accepted; for example, some Japanese characters were rejected in names.
- In valid documents, some declarations were both ignored and reported as missing (validity violations, misreported as fatal errors). These may be inappropriate expectations of a particular declaration ordering.
In addition, this processor has entered infinite loops when asked to parse some documents. This has been observed with UTF-16 input text (which sometimes produces less drastic errors) as well as with some numeric character references. Such errors are quite dangerous.
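A parser that can hang also complicates testing itself. One defensive measure a harness might take (an illustration only; this is not part of the SAX API or of the driver described earlier) is to run each parse in a worker thread and give up after a fixed timeout:

```java
import org.xml.sax.Parser;

// Hypothetical helper: abandon a parse that does not finish within a time limit.
class TimedParse {
    static boolean parseWithTimeout(final Parser parser, final String uri, long millis)
            throws InterruptedException {
        final boolean[] finished = new boolean[1];
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    parser.parse(uri);
                } catch (Exception e) {
                    // An error report still means the parse finished.
                }
                finished[0] = true;
            }
        });
        worker.setDaemon(true);   // don't keep the VM alive if the parse hangs
        worker.start();
        worker.join(millis);
        return finished[0];       // false: the parse timed out (or is still running)
    }
}
```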
Many output tests failed; more than seems usual.
As for documents which should have been rejected but were in fact accepted, there were many of those also:
- Various illegal processing instructions were accepted, including the <?XML ...?> style ones, facilitating islands of Microsoft-only "pseudo-XML" which have for a long time been troublesome;
- Many characters disallowed by XML were accepted by this processor, such as many control codes and the 'escape' character. This includes references to such characters, even when they could not be represented in Unicode (even with surrogate pairs), and UTF-8 encodings of such unrepresentable characters.
- Characters that should have been disallowed in PUBLIC identifiers were allowed.
- SGML style comments were accepted
- Text with embedded ']]>' was accepted
- Constructs like <element att="1" att="2"/> were not rejected;
- Illegal DTD syntax was accepted
Support for multiple text encodings seems weak; documents declared as being encoded in "UTF-16" were inappropriately rejected. Japanese encodings were neither rejected nor handled consistently.
This test harness shows that when a SAX ErrorHandler callback is used to report an exception, that exception will not be passed back to the application through the Parser.parse() call. This appears to be a driver issue with a simple fix.
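For reference, the contract being tested is that an exception thrown from an ErrorHandler callback should propagate out of Parser.parse(). A minimal sketch of an application relying on that behavior (the parser class name and document are placeholders):

```java
import org.xml.sax.*;
import org.xml.sax.helpers.ParserFactory;

// Checks that an exception thrown by an ErrorHandler callback is passed
// back to the application through Parser.parse().
class PropagationCheck {
    public static void main(String[] args) throws Exception {
        Parser parser = ParserFactory.makeParser(args[0]);   // vendor's SAX driver class
        parser.setErrorHandler(new HandlerBase() {
            public void fatalError(SAXParseException e) throws SAXException {
                throw e;    // the application chooses to abort on fatal errors
            }
        });
        try {
            parser.parse(args[1]);                            // a known-malformed document
            System.out.println("BUG: the exception was swallowed");
        } catch (SAXException expected) {
            System.out.println("OK: the exception propagated from parse()");
        }
    }
}
```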
Oracle XML Parser
Processor Name: Oracle XML Parser
Version: 2.0.0.2 (August 11, 1999)
Type: Validating
DOM Bundled: Yes
Size of JAR File: 556 KBytes (uncompressed)
Download From: http://www.oracle.com/xml/
This is the validating mode of Oracle's new processor. See the coverage of that non-validating processor.
Full Test Results: report-oracle-val.html
Raw Results: Passed 871 (of 1065)
Adjusted Results: Passed 871
This processor stumbled over a few valid documents that it should have accepted, and there were some difficulties handling NMTOKENS attribute lists and a mixed content specification.
This shared some of the problems that its non-validating sibling had with reading UTF-16 and multibyte UTF-8 characters. Similarly, it also had problems with names which actually tried to exercise the variety of name characters permitted by the XML specifications.
The output was much more correct than the output from its non-validating sibling. That's a bit puzzling, but it does suggest that the core engine needs only minor tweaks to make sure they're both equally correct.
Also of note is the fact that this validating processor accepted none of the SGML-isms that the non-validating one allowed in its input DTD syntax. Again, the non-validating processor should be acting more like the validating one.
Other than rejecting documents it shouldn't, the problems with this processor mostly related to validity violations that were not reported at all, including:
- Improper nesting of PEs in declarations (neither of the nesting constraints seems to be tested);
- Documents improperly declared as standalone;
- Several cases of content model violations (such as for an incorrect number of elements, and some cases of data inside an EMPTY element);
- Illegal attribute value defaults were accepted for ENTITY/ENTITIES attributes.
- A reference to an undeclared notation was ignored.
Internal errors were reported in various cases when validating illegal documents. These included array bounds exceptions (e.g. with NMTOKENS attributes) and null pointer exceptions working with IDREF/IDREFS values.
Significantly, none of the validity errors were continuable; some were correctly reported as non-fatal through SAX, but then the processor refused to continue processing. (As noted above, some were incorrectly reported as fatal errors in the first place.)
Sun "Java Project X"
Processor Name: Sun "Java Project X"
Version: TR2 (May 21, 1999)
Type: Validating
DOM Bundled: Yes
Size of JAR File: 132 KBytes (or 246 KBytes uncompressed)
Download From: http://java.sun.com/products/xml
This is the validating mode of the non-validating processor presented elsewhere.
Full Test Results: report-sun-val.html
Raw Results: Passed 1065 (of 1065)
Adjusted Results: Passed 1065
This processor reported no conformance errors. That was a design goal of the processor.
I analysed the negative results when I worked at Sun, and believe that every diagnostic reports the correct error. (This is the only parser that I can report I have carefully examined for that issue.)
Summary
Most of these conformance tests (the XMLTEST subset) have been available for quite some time. So you might expect that at this stage of the XML life cycle most processors would conform quite well to the specification. Surprisingly, that is not the case. Few processors passed all the XMLTEST cases, much less the whole OASIS suite. The class median on this open book test was about eighty percent, which suggests that many implementors just haven't applied any real effort to getting conformance right.
Common Types of Errors
A few types of errors are common in today's XML processors:
- Sloppy programming is sometimes visible, where one or two basic errors cause many problems parsing documents. One could expect that of early stage implementations, but those are not the ones exhibiting such problems most strongly.
- Many processors reject legal documents. This can be extremely troublesome in terms of interoperability.
- Several have problems handling UTF-16 encodings. Support for such encodings is available from the Java platform; the problems shown by the processors may relate to faulty detection of documents using such sixteen bit characters.
- A number had problems handling element and tag names with characters outside the ASCII range. This was evident even though this version of the OASIS tests doesn't fully cover those two pages of the XML specification; thorough coverage could involve up to 128,000 tests (one for each of 64,000 Unicode values in initial and subsequent characters, used in various types of XML identifiers).
- Handling of XML characters that require use of Unicode surrogate pairs is as yet inconsistent. Given the lack of any current real-world need for such characters, this is not a surprise -- only conformance testing is currently likely to turn up such problems. This is expected to change in the next few years, however.
- Many processors accept documents they should reject:
- They do not appear to pay attention to rules that restrict what characters may exist in content, or in PUBLIC identifiers. While the latter is probably just a common oversight, the former might be an attempt at false efficiency. (I have disabled such checks in some processors, and have found the difference in speed to be hard to observe.)
- Parameter entity handling is often troublesome. This is an area where the XML specification does not completely describe the intended behavior, while the errata have not yet identified which parts of a DTD are not subject to parameter entity expansion, so some confusion here is to be expected.
- With respect to validating processors, it was an unpleasant shock to discover that only one of them actually meets a basic requirement of the XML specification, that validity errors be continuable.
- The validity constraints related to nesting of parameter entities get inconsistent treatment. One non-validating processor treats them as well formedness constraints; one validating one ignores them entirely. As a systems designer, I see no benefit to the confusion that could arise from expecting validating and non-validating processors to differ here -- and see a drawback, in terms of code complexity (which in this case requires tracking information which is otherwise irrelevant). It'd be preferable to have these two constraints be gone, or be well formedness constraints.
- Output tests caused many trouble reports. It can be tricky to ensure that line ends and whitespace are handled correctly, though normalization of attribute values and PUBLIC identifiers should be easier.
SAX conformance was generally reasonable, although a few of the processors (or their drivers) had problems with error reporting or otherwise providing the correct data to the application. The fact that a more conformant SAX processor can be substituted for a less conformant one is truly advantageous when conformance is a goal.
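A sketch of why that substitution is cheap: an application written purely to the SAX 1.0 API can be pointed at a different (more conformant) processor at launch time, without any code changes. The driver class name below is a placeholder.

```java
import org.xml.sax.*;
import org.xml.sax.helpers.ParserFactory;

// Run with, for example:
//   java -Dorg.xml.sax.parser=com.example.SomeSaxDriver SaxApp document.xml
// Swapping processors is just a matter of changing that system property.
class SaxApp {
    public static void main(String[] args) throws Exception {
        Parser parser = ParserFactory.makeParser();   // reads the org.xml.sax.parser property
        parser.setDocumentHandler(new HandlerBase()); // real applications install their own handlers
        parser.parse(args[0]);
    }
}
```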
Some Observations
Some XML processors are part of libraries where they are tightly integrated with packages for other XML software components, such as XSL/T, XPath, and of course DOM. Such integration can preclude customers from using conformant processors, since such libraries are often set up to rely on proprietary processor APIs rather than using only standard SAX functionality. It would be better to avoid that sort of coupling, which is mostly necessary (for now) when the contents of a DTD need to be accessed.
It is an interesting reflection on the current debate about "Open Source" versus proprietary software that while some of the most conformant processors are commercially controlled and offered without an Open Source license, equally conformant Open Source processors have been available for much longer. The exception is in the case of validating processors; no Open Source validating XML processors have yet been provided.
The case of Ælfred is also instructive. That processor made some specific tradeoffs against conformance, in favor of small size. Given that, it is curious that it is more or less the median processor represented in this testing. None of the other processors claim to have made such tradeoffs; why is Ælfred not relatively worse?
I feel I should touch on performance impacts here. While it is surely possible to speed up by a few instructions here and there by being loose with conformance, it is also true that the most highly conformant XML processors are also among the fastest ones. There is no clear need to trade off those aspects of quality against each other.
To my understanding, none of the processors described here have significantly improved their conformance levels since their initial releases. I would very much like to see that change!
Conclusion
I hope that both users and providers of XML processors will take these testing results to heart. Users should build systems using only conformant processors, and should ask their providers to address all conformance shortcomings in their software. Those providers should be addressing such problems proactively, and fixing customer-reported conformance bugs. This is particularly important for "integrated" packages, where using higher level APIs imposes the use of a particular processor that may not be very conformant. Such higher level APIs could generally be provided using any processor supporting the SAX API.
Widespread and close conformance to core web standards, such as XML, will help the web community avoid recreating the pain coming from inconsistent support of standards like CSS 1 and HTML 4 in today's systems. That chaos is an ongoing drain on the budgets of web site providers, and causes a lower quality web experience than users deserve. It is in everyone's interest to keep that from repeating with XML. Only now, near the beginning of the XML adoption curve, is there a real chance to prevent such problems from taking root.