XPath should be fast. You can reduce the number of XPath calls to one:
from lxml import etree

doc = etree.fromstring(xml)
btags = doc.xpath('//a/b')
for b in btags:
    print(b.text)
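(Here `xml` is assumed to be the byte string from the question; as a hypothetical stand-in for testing, something shaped like this would work with the `//a/b` query above:)

# Hypothetical sample input; the real data comes from the question
xml = b'<root><a><b>hello</b><b>world</b></a></root>'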
If that is not fast enough, you could try Liza Daly's fast_iter. This has the advantage of not requiring that the entire XML be processed with etree.fromstring first, and parent nodes are thrown away after the children have been visited. Both of these things help reduce the memory requirements. Below is a modified version of fast_iter which is more aggressive about removing other elements that are no longer needed.
def fast_iter(context, func, *args, **kwargs):
    """
    fast_iter is useful if you need to free memory while iterating through a
    very large XML file.

    http://lxml.de/parsing.html#modifying-the-tree
    Based on Liza Daly's fast_iter
    http://www.ibm.com/developerworks/xml/library/x-hiperfparse/
    See also http://effbot.org/zone/element-iterparse.htm
    """
    for event, elem in context:
        func(elem, *args, **kwargs)
        # It's safe to call clear() here because no descendants will be
        # accessed
        elem.clear()
        # Also eliminate now-empty references from the root node to elem
        for ancestor in elem.xpath('ancestor-or-self::*'):
            while ancestor.getprevious() is not None:
                del ancestor.getparent()[0]
    del context
import io

def process_element(elt):
    print(elt.text)

context = etree.iterparse(io.BytesIO(xml), events=('end',), tag='b')
fast_iter(context, process_element)
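For a file too large to hold in memory, you would typically let iterparse read from disk rather than wrapping the data in a BytesIO buffer; lxml's iterparse accepts a filename or an open file object. A minimal sketch, assuming a hypothetical file named data.xml:

# Stream directly from disk; 'data.xml' is a hypothetical filename
context = etree.iterparse('data.xml', events=('end',), tag='b')
fast_iter(context, process_element)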
Liza Daly's article on parsing large XML files may prove useful reading to you too. According to the article, lxml with fast_iter can be faster than cElementTree's iterparse. (See Table 1.)