Scraping historical analyst opinions from Yahoo Finance in R
Yahoo Finance has data on historical analyst opinions for stocks. I'm interested in pulling this data into R for analysis; here is what I have so far:
getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    # Parse every HTML table on the "Upgrades & Downgrades" page
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""),
        stringsAsFactors = FALSE)
    # The opinions table is currently the 11th table on the page
    Data <- tables[[11]]
    Data$Date <- as.Date(Data$Date, '%d-%b-%y')
    # Drop the Date column and use it as the xts index
    Data <- xts(Data[, -1], order.by = Data[, 1])
    Data
}
getOpinions('AAPL')
I'm worried that this code will break if the position of the table (currently 11) changes, but I can't think of an elegant way to detect which table has the data I want. I tried the solution posted here, but it doesn't seem to work for this problem.
Is there a better way to scrape this data that is less likely to break if Yahoo rearranges its site?
Edit: it looks like there's already a package (fImport) out there that does this.
library(fImport)
yahooBriefing("AAPL")
Here is their solution, which doesn't return an xts object, and will probably break if the page layout changes (the yahooKeystats function in fImport is already broken):
function (query, file = "tempfile", source = NULL, save = FALSE,
    try = TRUE)
{
    if (is.null(source))
        source = "http://finance.yahoo.com/q/ud?s="
    if (try) {
        z = try(yahooBriefing(query, file, source, save, try = FALSE))
        if (class(z) == "try-error" || class(z) == "Error") {
            return("No Internet Access")
        }
        else {
            return(z)
        }
    }
    else {
        url = paste(source, query, sep = "")
        download.file(url = url, destfile = file)
        x = scan(file, what = "", sep = "\n")
        x = x[grep("Briefing.com", x)]
        x = gsub("</", "<", x, perl = TRUE)
        x = gsub("/", " / ", x, perl = TRUE)
        x = gsub(" class=.yfnc_tabledata1.", "", x, perl = TRUE)
        x = gsub(" align=.center.", "", x, perl = TRUE)
        x = gsub(" cell.......=...", "", x, perl = TRUE)
        x = gsub(" border=...", "", x, perl = TRUE)
        x = gsub(" color=.red.", "", x, perl = TRUE)
        x = gsub(" color=.green.", "", x, perl = TRUE)
        x = gsub("<.>", "", x, perl = TRUE)
        x = gsub("<td>", "@", x, perl = TRUE)
        x = gsub("<..>", "", x, perl = TRUE)
        x = gsub("<...>", "", x, perl = TRUE)
        x = gsub("<....>", "", x, perl = TRUE)
        x = gsub("<table>", "", x, perl = TRUE)
        x = gsub("<td nowrap", "", x, perl = TRUE)
        x = gsub("<td height=....", "", x, perl = TRUE)
        x = gsub("&amp;", "&", x, perl = TRUE)
        x = unlist(strsplit(x, ">"))
        x = x[grep("-...-[90]", x, perl = TRUE)]
        nX = length(x)
        x[nX] = gsub("@$", "", x[nX], perl = TRUE)
        x = unlist(strsplit(x, "@"))
        x[x == ""] = "NA"
        x = matrix(x, byrow = TRUE, ncol = 9)[, -c(2, 4, 6, 8)]
        x[, 1] = as.character(strptime(x[, 1], format = "%d-%b-%y"))
        colnames(x) = c("Date", "ResearchFirm", "Action", "From",
            "To")
        x = x[nrow(x):1, ]
        X = as.data.frame(x)
    }
    X
}
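If you do use it, you can still coerce the result to xts afterwards; a minimal sketch, assuming the Date column comes back as text in %Y-%m-%d form (which is what the strptime call above produces):
library(fImport)
library(xts)
brief <- yahooBriefing("AAPL")
# The Date column is character (or factor); convert before indexing
brief.xts <- xts(brief[, -1], order.by = as.Date(as.character(brief$Date)))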
Here is a hack you can use. Inside your function, add the following:
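A minimal sketch of that hack, assuming tables is the list returned by readHTMLTable in your function (the variable name tab.rows is just a placeholder):
# Pick the table with the most rows instead of hardcoding index 11
tab.rows <- sapply(tables, NROW)        # NROW() counts NULL entries as 0 rows
Data <- tables[[which.max(tab.rows)]]   # take the longest table on the page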
This will work as long as the longest table on the page is what you seek.
If you want to make it a little more robust, here is another approach:
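A sketch of one such approach, locating the table by the column headers it should carry; the header text here is an assumption (modeled on the colnames used in yahooBriefing above) and may need adjusting to what the page actually serves:
# Identify the table whose column names match the expected headers
expected <- c("Date", "Research Firm", "Action", "From", "To")
has.headers <- sapply(tables, function(tab) {
    !is.null(tab) && all(expected %in% names(tab))
})
Data <- tables[[which(has.headers)[1]]]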