lxml memory usage when parsing a huge XML file in Python
I am a Python newbie. I am trying to parse a huge XML file in my Python module using lxml. Despite clearing the elements at the end of each loop, my memory shoots up and crashes the application. I am sure I am missing something here; please help me figure out what that is.
Following are the main functions I am using:
from lxml import etree

def parseXml(context, attribList):
    for _, element in context:
        fieldMap = {}
        rowList = []
        readAttribs(element, fieldMap, attribList)
        readAllChildren(element, fieldMap, attribList, rowList)
        for row in rowList:
            yield row
        element.clear()

def readAttribs(element, fieldMap, attribList):
    for attrib in attribList:
        fieldMap[attrib] = element.get(attrib, '')

def readAllChildren(element, fieldMap, attribList, rowList):
    for childElem in element:
        readAttribs(childElem, fieldMap, attribList)
        if len(childElem) > 0:
            readAllChildren(childElem, fieldMap, attribList, rowList)
        rowList.append(fieldMap.copy())
        childElem.clear()

def main():
    attribList = ['name', 'age', 'id']
    context = etree.iterparse(fullFilePath, events=("start",))
    for row in parseXml(context, attribList):
        print row
Thanks!!
Example XML and the nested dictionary:
<root xmlns='NS'>
<Employee Name="Mr.ZZ" Age="30">
<Experience TotalYears="10" StartDate="2000-01-01" EndDate="2010-12-12">
<Employment id = "1" EndTime="ABC" StartDate="2000-01-01" EndDate="2002-12-12">
<Project Name="ABC_1" Team="4">
</Project>
</Employment>
<Employment id = "2" EndTime="XYZ" StartDate="2003-01-01" EndDate="2010-12-12">
<PromotionStatus>Manager</PromotionStatus>
<Project Name="XYZ_1" Team="7">
<Award>Star Team Member</Award>
</Project>
</Employment>
</Experience>
</Employee>
</root>
ELEMENT_NAME='element_name'
ELEMENTS='elements'
ATTRIBUTES='attributes'
TEXT='text'
xmlDef={ 'namespace' : 'NS',
'content' :
{ ELEMENT_NAME: 'Employee',
ELEMENTS: [{ELEMENT_NAME: 'Experience',
ELEMENTS: [{ELEMENT_NAME: 'Employment',
ELEMENTS: [{
ELEMENT_NAME: 'PromotionStatus',
ELEMENTS: [],
ATTRIBUTES:[],
TEXT:['PromotionStatus']
},
{
ELEMENT_NAME: 'Project',
ELEMENTS: [{
ELEMENT_NAME: 'Award',
ELEMENTS: [],
ATTRIBUTES:[],
TEXT:['Award']
}],
ATTRIBUTES:['Name','Team'],
TEXT:[]
}],
ATTRIBUTES: ['id','EndTime','StartDate','EndDate'],
TEXT:[]
}],
ATTRIBUTES: ['TotalYears','StartDate','EndDate'],
TEXT:[]
}],
ATTRIBUTES: ['Name','Age'],
TEXT:[]
}
}
Welcome to Python and Stack Overflow!
It looks like you've followed some good advice looking at lxml and especially etree.iterparse(..), but I think your implementation is approaching the problem from the wrong angle. The idea of iterparse(..) is to get away from collecting and storing data, and instead to process tags as they get read in. Your readAllChildren(..) function is saving everything to rowList, which grows and grows to cover the whole document tree. I made a few changes to show what's going on. Running with some dummy data:
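The instrumented snippet from the original answer did not survive extraction. The following is a reconstruction of the kind of tracing it describes, rewritten in Python 3 with the stdlib ElementTree (whose iterparse behaves the same way here); the function names, dummy XML, and trace print are mine:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def read_attribs(element, field_map, attrib_list):
    for attrib in attrib_list:
        field_map[attrib] = element.get(attrib, '')

def read_all_children(element, field_map, attrib_list, row_list):
    # Same recursion as the question's readAllChildren, plus a trace print.
    for child in element:
        read_attribs(child, field_map, attrib_list)
        if len(child) > 0:
            read_all_children(child, field_map, attrib_list, row_list)
        row_list.append(field_map.copy())
        print('rowList now holds %d rows (after <%s>)' % (len(row_list), child.tag))
        child.clear()

xml = b"<root><a id='1'><b id='2'/><b id='3'/></a></root>"
row_list = []
for _, element in ET.iterparse(BytesIO(xml), events=('start',)):
    # For a document this small, the parser consumes the whole input in its
    # first read, so at the very first 'start' event (the root) the entire
    # tree already exists -- and this call walks all of it at once.
    read_all_children(element, {}, ['id'], row_list)
    break
```

Note that the trace also exposes a second quirk of the original code: because fieldMap is shared down the recursion, the parent's row ends up holding the last child's values.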
It's a little hard to read, but you can see it's climbing the whole tree down from the root tag on the first pass, building up rowList for every element in the entire document. You'll also notice it's not even stopping there: since the element.clear() call comes after the yield statement in parseXml(..), it doesn't get executed until the second iteration (i.e. the next element in the tree).

Incremental processing FTW
A simple fix is to let iterparse(..) do its job: parse iteratively! The following will pull the same information and process it incrementally instead. Running on the same dummy XML:
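The answer's incremental snippet was also lost in extraction. A sketch of the approach it describes, in Python 3 with the stdlib ElementTree so it runs without lxml (lxml's etree.iterparse is called the same way); the function name and dummy XML are mine:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def parse_xml(source, attrib_list):
    """Yield one row per element as soon as its end tag is read."""
    for _, element in ET.iterparse(source, events=('end',)):
        row = {a: element.get(a, '') for a in attrib_list}
        element.clear()  # all children are processed by now, so clearing is safe
        yield row

dummy = b"""<root>
<employee name="Mr.ZZ" age="30" id="1"/>
<employee name="Ms.XX" age="25" id="2"/>
</root>"""

rows = list(parse_xml(BytesIO(dummy), ['name', 'age', 'id']))
for row in rows:
    print(row)
```

Nothing accumulates between iterations: each row is handed to the caller and the element is cleared immediately. Note the last row comes from the root tag, which carries none of the attributes, illustrating the point below about empty rows.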
This should greatly improve both the speed and memory performance of your script. Also, by hooking the 'end' event, you're free to clear and delete elements as you go, rather than waiting until all children have been processed.

Depending on your dataset, it might be a good idea to only process certain types of elements. The root element, for one, probably isn't very meaningful, and other nested elements may also fill your dataset with a lot of {'age': u'', 'id': u'', 'name': u''}.
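One way to sketch that filtering, again with the stdlib ElementTree (tag names and sample data are illustrative; lxml's iterparse can do the same via its tag argument):

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def parse_selected(source, attrib_list, wanted_tags):
    # Only emit rows for the element types we actually care about.
    for _, element in ET.iterparse(source, events=('end',)):
        if element.tag in wanted_tags:
            yield {a: element.get(a, '') for a in attrib_list}
        element.clear()

dummy = b"""<root>
<Employment id="1"><Project Name="ABC_1" Team="4"/></Employment>
</root>"""

rows = list(parse_selected(BytesIO(dummy), ['id', 'Name', 'Team'],
                           {'Employment', 'Project'}))
for row in rows:
    print(row)
```

Keep in mind that with a default namespace declared, as in the question's sample (xmlns='NS'), the tags seen here would be '{NS}Employment', '{NS}Project', and so on.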
Or, with SAX
As an aside, when I read "XML" and "low-memory" my mind always jumps straight to SAX, which is another way you could attack this problem, using the builtin xml.sax module.

You'll have to evaluate both options based on what works best in your situation (and maybe run a couple of benchmarks, if this is something you'll be doing often).
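The xml.sax example itself was not preserved either; a self-contained sketch of that approach (the handler class name and sample data are mine):

```python
import xml.sax
from io import BytesIO

class AttribHandler(xml.sax.ContentHandler):
    """Collect the chosen attributes from every start tag, one row per element."""
    def __init__(self, attrib_list):
        super().__init__()
        self.attrib_list = attrib_list
        self.rows = []

    def startElement(self, name, attrs):
        # attrs is a read-only mapping; .get mirrors lxml's element.get
        self.rows.append({a: attrs.get(a, '') for a in self.attrib_list})

handler = AttribHandler(['name', 'age', 'id'])
xml.sax.parse(BytesIO(b"<root><employee name='Mr.ZZ' age='30' id='1'/></root>"),
              handler)
for row in handler.rows:
    print(row)
```

SAX never builds a tree at all, so memory stays flat regardless of document size; the trade-off is that you must track any parent/child context yourself in the handler.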
Be sure to follow up with how things work out!
Edit based on follow-up comments
Implementing either of the above solutions may require some changes to the overall structure of your code, but anything you have should still be doable. For instance, to process "rows" in batches, you could have:
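The batch example referred to here was likewise lost; one common shape for it is a small helper that groups any row generator (such as an iterparse-based one) into fixed-size chunks:

```python
from itertools import islice

def batched(rows, batch_size):
    """Group an iterator of rows into lists of at most batch_size items."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical usage: 'rows' would normally come from the streaming parser,
# and each batch could then be written to a database in a single call.
rows = ({'id': str(i)} for i in range(7))
for batch in batched(rows, 3):
    print(len(batch))
```

This keeps at most batch_size rows in memory at a time, so the streaming benefit of iterparse or SAX is preserved end to end.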