When you say "Wikipedia Data Extraction", I assume you're referring to the software DBpedia uses to transform Wikipedia XML dumps into the DBpedia data dumps? Have you considered using the DBpedia dumps themselves?
Tools for extracting information from web pages are a very broad space. What kind of information do you want to extract? Is it semi-structured data (e.g. tables) or unstructured text (e.g. prose)? Are you interested in metadata such as page title and author, or lower-level concepts such as named entities?
(I would have left these as clarifying comments on the question, but my account level doesn't allow it.)
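If the DBpedia dumps would work for you, here's a minimal sketch (in Python, using the requests library) of pulling already-extracted data from the public DBpedia SPARQL endpoint instead of re-running the extraction framework over the XML dumps yourself. The class (dbo:City) and query are just illustrative, not specific to your use case:

```python
# Minimal sketch: query the public DBpedia SPARQL endpoint rather than
# parsing the Wikipedia XML dumps yourself. The query below is illustrative.
import requests

ENDPOINT = "https://dbpedia.org/sparql"  # public DBpedia endpoint

# Example: fetch English abstracts for a few resources typed as dbo:City.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?abstract WHERE {
  ?city a dbo:City ;
        dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Standard SPARQL JSON results: results.bindings is a list of variable maps.
for binding in response.json()["results"]["bindings"]:
    print(binding["city"]["value"])
    print(binding["abstract"]["value"][:120], "...")
```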