How do I scrape tables from Neurosynth using R?

I am trying to scrape some table data from Neurosynth to do with fMRI data: https://www.neurosynth.org/locations/2_2_2_6/. (It doesn't matter which data for now; I just want to be able to get data from the table in the associations section of a locations page.)

I have managed to scrape a simple Wikipedia page using the following code:

    library(rvest)

    url <- "https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population"
    read_html(url) %>%
      html_element("table") %>%
      html_table()


This worked absolutely fine, no problem. I tried the same thing with my Neurosynth data, i.e.:

    neurosynth_link <- "https://www.neurosynth.org/locations/2_2_2_6/"
    read_html(neurosynth_link) %>%
      html_element("table") %>%
      html_table()

I get:

# A tibble: 0 × 4
# … with 4 variables: Title <lgl>, Authors <lgl>, Journal <lgl>, Activations <lgl>

Doesn't work.

I have played around a bit and have managed to get the headings of the table that I want (z-score, posterior prob., etc.) with the following code:

neurosynth_link <- "https://www.neurosynth.org/locations/2_2_2_6/"
neurosynth_page <- read_html(neurosynth_link)
neuro_synth_table <- neurosynth_page %>%
  html_nodes("table#location_analyses_table") %>%
  html_table()
neuro_synth_table

[[1]]
# A tibble: 1 × 5
  ``    `Individual voxel` `Individual voxel` `Seed-based network` `Seed-based network`
  <chr> <chr>              <chr>              <chr>                <chr>
1 Name  z-score            Posterior prob.    Func. conn. (r)      Meta-analytic coact. (r)

But that's as far as I can get. What's going on?

太阳哥哥 2025-01-16 14:35:53

The table you want is generated by JavaScript, so it doesn't actually exist in the static HTML you are trying to scrape. The JavaScript downloads a separate JSON file that contains all the data for every page of the table.
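
A quick way to see this with rvest is to list the table rows that actually exist in the static markup (a minimal sketch; it should show only the header rows, since the data rows are filled in client-side, which is why html_table() has nothing useful to return):

library(rvest)

page <- read_html("https://www.neurosynth.org/locations/2_2_2_6/")

# Rows present in the static markup: just the header rows, because the
# data rows are inserted later by JavaScript.
page %>%
  html_elements("table#location_analyses_table tr") %>%
  html_text2()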

This is actually good news: it means you can get the entries for all 134 pages of the data you are trying to scrape in one go. We can find the JSON file's URL in the browser's developer tools and use that directly. With a little bit of wrangling we get all the data in a single data frame. Here's a full reprex:

library(httr)

# JSON endpoint that the page's JavaScript queries for the table data
url    <- "https://www.neurosynth.org/api/locations/2_2_2_6/compare?_=1645644227258"
result <- content(GET(url), "parsed")$data

# Each element of result is one table row; bind them into a data frame
names  <- c("Name", "z_score", "post_prob", "func_con", "meta_analytic")
df     <- do.call(rbind, lapply(result, function(x) setNames(as.data.frame(x), names)))

# Convert z-scores to numeric (unparseable values become NA) and sort
df$z_score <- as.numeric(df$z_score)
#> Warning: NAs introduced by coercion
df <- df[order(-df$z_score), ]

Now we have the data in a nice data frame:

head(df)
#>         Name z_score post_prob func_con meta_analytic
#> 760       mm    8.78      0.86     0.15          0.52
#> 509    gamma    8.10      0.85     0.19          0.63
#> 1135 sources    6.46      0.77     0.10          0.32
#> 825    noise    5.33      0.73     0.00          0.08
#> 671  lesions    4.66      0.72    -0.01          0.00
#> 1137 spatial    4.57      0.63    -0.15          0.00

And we have all the data:

nrow(df)
#> [1] 1334

Created on 2022-02-23 by the reprex package (v2.0.1)
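
If you'd rather not bind the rows one by one, here is a roughly equivalent sketch using jsonlite instead of httr. It assumes the endpoint also works without the trailing "_=..." cache-buster parameter, and that fromJSON() simplifies the rows in $data into a matrix:

library(jsonlite)

# Same API endpoint as above; the "_" query parameter added by the page's
# JavaScript is assumed to be an optional cache-buster and is dropped here.
url <- "https://www.neurosynth.org/api/locations/2_2_2_6/compare"

# fromJSON() parses the response; $data holds one entry per table row.
raw <- fromJSON(url)
df2 <- as.data.frame(raw$data, stringsAsFactors = FALSE)
names(df2) <- c("Name", "z_score", "post_prob", "func_con", "meta_analytic")

# The numeric columns may arrive as character, so convert them (values that
# can't be parsed become NA, as in the httr version) before sorting.
num_cols <- c("z_score", "post_prob", "func_con", "meta_analytic")
df2[num_cols] <- lapply(df2[num_cols], as.numeric)
df2 <- df2[order(-df2$z_score), ]
head(df2)

Either way, once the data is in a plain data frame it can be filtered or exported with write.csv() like any other data set.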
