How do I scrape tables from Neurosynth using R?
I am trying to web-scrape some table data, to do with fMRI data, from Neurosynth: https://www.neurosynth.org/locations/2_2_2_6/ (it doesn't matter what data for now; I just want to be able to get data from the table in the Associations section of a location page).
I have managed to scrape a simple Wikipedia page using the following code:
library(rvest)

url = "https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population"
read_html(url) %>%
  html_element("table") %>%
  html_table()
This worked absolutely fine, no problem. I try the same thing with my Neurosynth data, i.e.:
neurosynth_link = "https://www.neurosynth.org/locations/2_2_2_6/"
read_html(neurosynth_link) %>%
  html_element("table") %>%
  html_table()
I get:
# A tibble: 0 × 4
# … with 4 variables: Title <lgl>, Authors <lgl>, Journal <lgl>, Activations <lgl>
It doesn't work.
I have played around a bit and have managed to get the headings of the table I want (z-score, posterior prob., etc.) with the following code:
neurosynth_link = "https://www.neurosynth.org/locations/2_2_2_6/"
neurosynth_page = read_html(neurosynth_link)
neuro_synth_table = neurosynth_page %>%
  html_nodes("table#location_analyses_table") %>%
  html_table()
neuro_synth_table
[[1]]
# A tibble: 1 × 5
  ``    `Individual voxel` `Individual voxel` `Seed-based network` `Seed-based network`
  <chr> <chr>              <chr>              <chr>                <chr>
1 Name  z-score            Posterior prob.    Func. conn. (r)      Meta-analytic coact. (r)
But that's as far as I can get. What's going on?
The table you want is generated by JavaScript, so it doesn't actually exist within the static HTML you are trying to scrape. The JavaScript downloads a separate JSON file that contains the data for every page of the table.
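You can check this from R: the static document contains the table shell and its headers, but no body rows, which is why rvest only ever returns the headings. A quick sketch of that check (same selector as in the question):

library(rvest)

page = read_html("https://www.neurosynth.org/locations/2_2_2_6/")

# Count the body rows of the associations table in the raw HTML.
# This should come back as 0, because the rows are injected by
# JavaScript after the page loads.
page %>%
  html_elements("table#location_analyses_table tbody tr") %>%
  length()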
This is actually good news: it means you can get the entries for all 134 pages of the data you are trying to scrape in one go. We can find the JSON file's URL in the browser's developer tools (Network tab) and use that. With a little bit of wrangling we get all the data in a single data frame. Here's a full reprex:
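The original reprex isn't reproduced here, so what follows is a minimal sketch of the approach. The endpoint URL and the shape of the JSON payload (a data element holding one row per analysis, with the analysis name wrapped in an HTML link) are assumptions read off the Network tab, so verify both in your own browser:

library(jsonlite)
library(dplyr)

# Hypothetical endpoint as it appears in the browser's Network tab for this
# location; check your own Network tab for the exact URL
json_url = "https://www.neurosynth.org/api/locations/2_2_2_6/compare/"

raw = fromJSON(json_url)

# Assumption: raw$data holds one row per analysis, with the same five
# columns as the on-page table
neurosynth_df = as.data.frame(raw$data) %>%
  setNames(c("name", "z_score", "posterior_prob",
             "func_conn_r", "meta_coact_r"))

# The name column arrives as an HTML anchor (<a href="...">term</a>);
# strip the tags to keep just the term itself
neurosynth_df$name = gsub("<[^>]+>", "", neurosynth_df$name)

# The numeric columns arrive as strings; convert them
neurosynth_df = neurosynth_df %>%
  mutate(across(-name, as.numeric))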
Now we have the data in a nice data frame:
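The printed output from the original reprex isn't reproduced here; under the assumptions of the sketch above, you can inspect the frame yourself:

glimpse(neurosynth_df)
head(neurosynth_df)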
And we have all the data:
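The full printout is likewise omitted; as a sanity check under the same assumptions, the row count should match what the paginated table implies (134 pages of roughly ten rows each):

nrow(neurosynth_df)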
Created on 2022-02-23 by the reprex package (v2.0.1)