Posted by Daniel Ennis on 01/25/08 12:04
Anze wrote:
> Hi!
>
> I was trying to get the answer on the net but I only found partial answers -
> I hope someone can help me out...
>
> I am looking for an XML parser that would:
> 1) validate XML before doing anything
> 2) be fast
> 3) allow parsing of big XML documents (small memory footprint)
>
> I am implementing a system that will fetch XML from multiple sites across
> the Internet and enter the data into the local database. Since the
> connections (and vendor implementors ;) are unreliable, I need to be sure
> that the XML is well formed before doing anything.
> The XML documents can be very big and the number of hosts is also large,
> which means the parser needs to be fast and memory efficient.
>
> Options I have found:
> a) DOM - not suitable because of 3)
> b) SAX
> c) pull parsers
>
> What are the differences between SAX and pull parsers performance-wise? I
> have already implemented both types in the past, but never on the same
> project, so I could not compare them.
>
> I don't really care about the difficulty of programming them, as they are
> both quite easy to work with once you understand them. Also, I will only be
> parsing one XML document at a time, so there is no advantage to a pull
> parser here.
>
> I would appreciate any thoughts on performance of both XML parser classes,
> and especially some pointers about which parser would be the most
> efficient... What would you use?
>
> Thank you!
>
> Kind regards,
>
> Anze
Have you looked into SimpleXML? It's a native, built-in XML parser in
PHP5, so the performance should be a lot faster than a userland parser.
Not sure how it handles large documents, though. It essentially builds
the document into an object tree that you access like
(string) $root->cds->cd[1]->artist;
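For what it's worth, here is a minimal sketch of that access pattern; the XML string and the element names (cds, cd, artist) are made up for illustration:

```php
<?php
// Hypothetical catalog document, just for illustration.
$xml = '<root><cds>'
     . '<cd><artist>Miles Davis</artist></cd>'
     . '<cd><artist>John Coltrane</artist></cd>'
     . '</cds></root>';

// simplexml_load_string() returns a SimpleXMLElement tree,
// or false if the document is not well formed -- which also
// covers the "validate before doing anything" requirement
// at the well-formedness level.
$root = simplexml_load_string($xml);
if ($root === false) {
    die("Malformed XML\n");
}

// Child elements are object properties; repeated elements are
// indexed numerically. Cast to string to get the text content.
echo (string) $root->cds->cd[1]->artist, "\n"; // John Coltrane
?>
```

Note that SimpleXML loads the whole document into memory, so for the very large documents you mention, PHP5's XMLReader (a pull parser) would keep the footprint small.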
--
Daniel Ennis
faNetworks.net - Quality Web Hosting and Ventrilo Services
System Administrator / Web Developer
PHP Developer for 6 years
daniel@fanetworks.net