Join to get state-level Movil network operator coverage
fdb is this file, and I need to join it with the number of households (HHs) by block group, but it's not working the way the Geeks for Geeks merge tutorial says it should: the number of census block groups (the first 12 digits of BlockCode) after merging isn't the same as at the beginning. I was expecting the join to give me 226773 unique GEOIDs. The merge isn't producing more; it's producing fewer, and I can't find the cause. I also don't understand the joins that give me more than the 239780 census block groups that tidycensus reports. Could someone please help?
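For context on why the three joins give three different counts (a minimal toy sketch, not your data): with dplyr, the number of unique keys in the result follows directly from the join type. An inner join (what base merge does by default) keeps only keys present in both tables, a full join keeps the union of both key sets, and a right join keeps exactly the right-hand table's keys:

```r
library(dplyr)

x <- data.frame(GEOID = c("a", "b", "c"))        # 3 unique keys (like fdb)
y <- data.frame(GEOID = c("b", "c", "d", "e"))   # 4 unique keys (like HH_units)

length(unique(inner_join(x, y, by = "GEOID")$GEOID)) # 2: intersection only
length(unique(full_join(x, y, by = "GEOID")$GEOID))  # 5: union of both key sets
length(unique(right_join(x, y, by = "GEOID")$GEOID)) # 4: exactly y's keys
```

By that logic, the inner merge count is the intersection of the two key sets, the full join count is their union, and the right join count should equal the number of unique GEOIDs in HH_units.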
library(tidycensus)
fdb <- read.csv("fbd_us_with_satellite_dec2020_v1.csv")
abbr <- c("AL","AK","AZ","AR","CA","CO","CT","DE","DC","FL","GA","HI","ID","IL","IN","IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV",
"NH","NJ","NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA","WA","WV","WI","WY") # 51 states
HH_units <- get_acs(geography = "block group", variables = c(households = "B25001_001"), state = abbr) # B25001_001: total housing units (HHs)
HH_units$households <- HH_units$estimate
library(dplyr)
HH_units <- HH_units %>%
select(
GEOID,
NAME,
households
)
length(unique(fdb$BlockCode)) # 11164855
# first 12 of the 15 digits = state(2) + county(3) + tract(6) + block group(1)
fdb$GEOID <- substring(fdb$BlockCode, 1, 12)
length(unique(fdb$GEOID)) # 226773 unique block-group GEOIDs
# Apparently you have to raise R's memory limit to join huge data sets
# (note: memory.limit() is Windows-only and was made defunct in R 4.2)
memory.limit()
memory.limit(400000)
all <- merge(x = fdb,
             y = HH_units,
             by = "GEOID") # base merge defaults to an inner join
length(unique(all$GEOID)) # 138625, not 226773
all2 <- full_join(x = fdb,
                  y = HH_units,
                  by = "GEOID")
length(unique(all2$GEOID)) # 326928
all3 <- right_join(x = fdb,
                   y = HH_units,
                   by = "GEOID")
length(unique(all3$GEOID)) # 239780
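An editor's note, hedged as a hypothesis rather than a confirmed diagnosis: a common cause of exactly this symptom is read.csv parsing BlockCode as numeric, which drops the leading zero of every state FIPS below 10 (Alabama is 01, Connecticut is 09, and so on), so the GEOIDs derived by substring can never match the character GEOIDs tidycensus returns. A self-contained sketch of the failure mode, using hypothetical block codes:

```r
# Two hypothetical 15-digit block codes from leading-zero states:
codes_char <- c("010010201001001", "090010201001001") # read as character
substring(codes_char, 1, 12)  # "010010201001" "090010201001": valid GEOIDs

codes_num <- c(10010201001001, 90010201001001)        # read as numeric
substring(codes_num, 1, 12)   # GEOIDs built from 14-digit strings: wrong,
                              # so they never match HH_units$GEOID

# Possible fix: force the column to character when reading, e.g.
# fdb <- read.csv("fbd_us_with_satellite_dec2020_v1.csv",
#                 colClasses = c(BlockCode = "character"))
```

Whatever the cause turns out to be, dplyr's anti_join shows which keys fail to match: anti_join(distinct(fdb, GEOID), HH_units, by = "GEOID") lists the fdb block groups that the inner merge dropped.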