How to access Wikipedia from R?

Posted on 2024-11-09 07:38:47


Is there any package for R that allows querying Wikipedia (most probably using the MediaWiki API) to get a list of available articles relevant to a given query, as well as import selected articles for text mining?

Comments (3)

遗忘曾经 2024-11-16 07:38:47

There is WikipediR, 'A MediaWiki API wrapper in R'

library(devtools)
install_github("Ironholds/WikipediR")
library(WikipediR)

It includes these functions:

ls("package:WikipediR")
 [1] "wiki_catpages"      "wiki_con"           "wiki_diff"          "wiki_page"         
 [5] "wiki_pagecats"      "wiki_recentchanges" "wiki_revision"      "wiki_timestamp"    
 [9] "wiki_usercontribs"  "wiki_userinfo"  

Here it is in use, getting the contribution details and user details for a bunch of users:

library(RCurl)
library(XML)

# scrape page to get usernames of users with highest numbers of edits
top_editors_page <- "http://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits"
top_editors_table <- readHTMLTable(top_editors_page)
very_top_editors <- as.character(top_editors_table[[3]][1:5,]$User)

# setup connection to wikimedia project 
con <- wiki_con("en", project = c("wikipedia"))

# connect to API and get last 50 edits per user
user_data <- lapply(very_top_editors,  function(i) wiki_usercontribs(con, i) )
# and get information about the users (registration date, gender, editcount, etc)
user_info <- lapply(very_top_editors,  function(i) wiki_userinfo(con, i) )
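The question also asks about importing article text for mining. The wiki_* build installed above does not show that step, but later versions of WikipediR expose page retrieval through page_content(); the following is a minimal sketch assuming that newer interface (function name and argument order taken from the package's later documentation), so treat it as illustrative rather than a drop-in continuation of the code above:

# assumes a later WikipediR release in which the wiki_* functions were
# replaced by names such as page_content()
library(WikipediR)

# retrieve the rendered content of a single article from English Wikipedia
pg <- page_content("en", "wikipedia", page_name = "R (programming language)")

# the article HTML sits nested inside the returned list; inspect it and strip
# the tags before passing the text on to a text-mining pipeline
str(pg, max.level = 2)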
李不 2024-11-16 07:38:47

Use the RCurl package for retrieving the info, and the XML or RJSONIO packages for parsing the response.

If you are behind a proxy, set your options.

opts <- list(
  proxy = "136.233.91.120", 
  proxyusername = "mydomain\\myusername", 
  proxypassword = 'whatever', 
  proxyport = 8080
)

Use the getForm function to access the API.

search_example <- getForm(
  "http://en.wikipedia.org/w/api.php", 
  action  = "opensearch", 
  search  = "Te", 
  format  = "json",
  .opts   = opts
)

Parse the results.

fromJSON(rawToChar(search_example))
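The opensearch call above covers finding articles; to import the text of a selected article, the same pattern works with the TextExtracts module (prop=extracts), which is enabled on Wikipedia. A minimal sketch reusing the opts list and fromJSON from above, with "Text mining" as an example title:

# fetch the plain text of one article for text mining
extract_raw <- getForm(
  "http://en.wikipedia.org/w/api.php",
  action      = "query",
  prop        = "extracts",
  explaintext = "1",        # plain text instead of HTML
  titles      = "Text mining",
  format      = "json",
  .opts       = opts
)

# the result is keyed by page id; take the first page's extract
extract_parsed <- fromJSON(rawToChar(extract_raw))
article_text   <- extract_parsed$query$pages[[1]]$extract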
叫思念不要吵 2024-11-16 07:38:47

A newer option is the wikifacts package (on CRAN):

library(wikifacts)
wiki_define('R (programming language)')
## R (programming language) 
## "R is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. Polls, data mining surveys, and studies of scholarly literature databases show substantial increases in popularity; as of April 2021, R ranks 16th in the TIOBE index, a measure of popularity of programming languages.The official R software environment is a GNU package.\nIt is written primarily in C, Fortran, and R itself (thus, it is partially self-hosting) and is freely available under the GNU General Public License. Pre-compiled executables are provided for various operating systems."