Posted by Anze on 01/21/08 14:57
Hi!
I tried to find the answer on the net but only found partial answers -
I hope someone can help me out...
I am looking for an XML parser that would:
1) validate XML before doing anything
2) be fast
3) allow parsing of big XML documents (small memory footprint)
I am implementing a system that will fetch XML from multiple sites across
the Internet and insert the data into a local database. Since the
connections (and vendor implementations ;) are unreliable, I need to be sure
that the XML is well formed before doing anything with it.
The XML files can be very big and the number of hosts is also large, which
means the parser needs to be fast and memory efficient.
Options I have found:
a) DOM - not suitable because of 3)
b) SAX
c) pull parsers
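To illustrate option c), here is a minimal sketch of pull-style streaming in Python using the standard library's `xml.etree.ElementTree.iterparse` (the `<item>` tag and the document are just made-up examples): the caller asks for the next element, and clearing each element after processing keeps the memory footprint small even for very large documents. Note that `iterparse` only checks well-formedness, not validity against a schema.

```python
# Pull-style streaming parse: the caller drives the loop and pulls
# elements one at a time; clearing processed elements keeps memory
# usage roughly constant regardless of document size.
import io
import xml.etree.ElementTree as ET

def count_items(stream):
    """Count <item> elements without building the whole tree in memory."""
    count = 0
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "item":
            count += 1
        elem.clear()  # free the element's children once it is processed
    return count

doc = b"<root><item>a</item><item>b</item><item>c</item></root>"
print(count_items(io.BytesIO(doc)))  # prints 3
```

A malformed document makes `iterparse` raise `xml.etree.ElementTree.ParseError` mid-stream, so any database writes would need to happen only after the loop finishes cleanly.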
What are the differences between SAX and pull parsers performance wise? I
have already implemented both types in the past, but never on the same
project so I could compare them.
I don't really care about the difficulty of programming them, as both are
quite easy to work with once you understand them. Also, I will only be
parsing one XML document at a time, so the pull parser has no advantage
there.
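For contrast with the pull approach, a push-style SAX sketch using Python's standard `xml.sax` module (again with a made-up `<item>` tag) looks like this. The main structural difference is that the parser drives the process and invokes callbacks, so any state has to live in the handler object rather than in a loop the caller controls:

```python
# Push-style SAX parse: the parser calls back into the handler, so
# counting state is kept as an attribute of the handler object.
import io
import xml.sax

class ItemCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        if name == "item":
            self.count += 1

doc = b"<root><item>a</item><item>b</item></root>"
handler = ItemCounter()
xml.sax.parse(io.BytesIO(doc), handler)
print(handler.count)  # prints 2
```

Performance-wise the two styles tend to be close, since both stream the input without building a tree; the difference is mostly in who holds the control flow.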
I would appreciate any thoughts on the performance of both classes of XML
parser, and especially some pointers about which parser would be the most
efficient... What would you use?
Thank you!
Kind regards,
Anze