Extracting historical analyst opinions from Yahoo Finance in R


Yahoo Finance has data on historical analyst opinions for stocks. I'm interested in pulling this data into R for analysis, and here is what I have so far:

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    # read every HTML table on the quote page; the opinions table is currently the 11th
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""), stringsAsFactors=FALSE)
    Data <- tables[[11]]
    # parse dates of the form "18-Nov-11" and index the remaining columns by them
    Data$Date <- as.Date(Data$Date,'%d-%b-%y')
    Data <- xts(Data[,-1],order.by=Data[,1])
    Data
}

getOpinions('AAPL')

I'm worried that this code will break if the position of the table (currently 11) changes, but I can't think of an elegant way to detect which table has the data I want. I tried the solution posted here, but it doesn't seem to work for this problem.

Is there a better way to scrape this data that is less likely to break if yahoo re-arranges their site?
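
For what it's worth, the table could also be located by the text it contains, using XPath instead of a fixed index. A minimal sketch (untested; it assumes the XML package and that the target table has a cell containing "Research Firm"):

findOpinionsTable <- function(symbol) {
    require(XML)
    doc <- htmlParse(paste("http://finance.yahoo.com/q/ud?s=", symbol, sep = ""))
    # find the nearest enclosing table of a cell whose text contains "Research Firm"
    nodes <- getNodeSet(doc, "//td[contains(., 'Research Firm')]/ancestor::table[1]")
    if (length(nodes) == 0) stop("opinions table not found")
    readHTMLTable(nodes[[1]], stringsAsFactors = FALSE)
}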

Edit: it looks like there's already a package (fImport) out there that does this.

library(fImport)
yahooBriefing("AAPL")

Here is their solution, which doesn't return an xts object, and will probably break if the page layout changes (the yahooKeystats function in fImport is already broken):

function (query, file = "tempfile", source = NULL, save = FALSE, 
    try = TRUE) 
{
    if (is.null(source)) 
        source = "http://finance.yahoo.com/q/ud?s="
    if (try) {
        z = try(yahooBriefing(query, file, source, save, try = FALSE))
        if (class(z) == "try-error" || class(z) == "Error") {
            return("No Internet Access")
        }
        else {
            return(z)
        }
    }
    else {
        url = paste(source, query, sep = "")
        download.file(url = url, destfile = file)
        x = scan(file, what = "", sep = "\n")
        x = x[grep("Briefing.com", x)]
        x = gsub("</", "<", x, perl = TRUE)
        x = gsub("/", " / ", x, perl = TRUE)
        x = gsub(" class=.yfnc_tabledata1.", "", x, perl = TRUE)
        x = gsub(" align=.center.", "", x, perl = TRUE)
        x = gsub(" cell.......=...", "", x, perl = TRUE)
        x = gsub(" border=...", "", x, perl = TRUE)
        x = gsub(" color=.red.", "", x, perl = TRUE)
        x = gsub(" color=.green.", "", x, perl = TRUE)
        x = gsub("<.>", "", x, perl = TRUE)
        x = gsub("<td>", "@", x, perl = TRUE)
        x = gsub("<..>", "", x, perl = TRUE)
        x = gsub("<...>", "", x, perl = TRUE)
        x = gsub("<....>", "", x, perl = TRUE)
        x = gsub("<table>", "", x, perl = TRUE)
        x = gsub("<td nowrap", "", x, perl = TRUE)
        x = gsub("<td height=....", "", x, perl = TRUE)
        x = gsub("&", "&", x, perl = TRUE)
        x = unlist(strsplit(x, ">"))
        x = x[grep("-...-[90]", x, perl = TRUE)]
        nX = length(x)
        x[nX] = gsub("@$", "", x[nX], perl = TRUE)
        x = unlist(strsplit(x, "@"))
        x[x == ""] = "NA"
        x = matrix(x, byrow = TRUE, ncol = 9)[, -c(2, 4, 6, 8)]
        x[, 1] = as.character(strptime(x[, 1], format = "%d-%b-%y"))
        colnames(x) = c("Date", "ResearchFirm", "Action", "From", 
            "To")
        x = x[nrow(x):1, ]
        X = as.data.frame(x)
    }
    X
}
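
Since yahooBriefing returns a plain data.frame, converting its result to xts afterwards is straightforward. A minimal sketch, assuming the Date column comes back as "%Y-%m-%d" character strings (which is what the strptime call above produces):

library(fImport)
library(xts)

# a sketch: reindex the yahooBriefing() result as an xts object
briefingToXts <- function(symbol) {
    df <- yahooBriefing(symbol)
    xts(df[, -1], order.by = as.Date(df$Date))
}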


Answer from 乞讨 (2024-12-13 13:45:28):


Here is a hack you can use. Inside your function, replace the hard-coded tables[[11]] with the following:

# GET THE POSITION OF TABLE WITH MAX. ROWS
position = which.max(sapply(tables, NROW))
Data     = tables[[position]]

This will work as long as the longest table on the page is what you seek.

If you want to make it a little more robust, here is another approach:

# GET POSITION OF TABLE CONTAINING "Research Firm" IN ITS COLUMN NAMES
position = sapply(tables, function(tab) 'Research Firm' %in% names(tab))
Data     = tables[[which(position)[1]]]

Note the double brackets: tables[position == TRUE] would return a one-element list rather than the data frame itself.
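
Putting the two ideas together, here is a sketch of a revised getOpinions that prefers the column-name match and falls back to the longest table (the exact column name "Research Firm" is an assumption, as above):

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""), stringsAsFactors = FALSE)
    # prefer the table whose column names include "Research Firm" ...
    hit <- which(sapply(tables, function(tab) "Research Firm" %in% names(tab)))
    # ... otherwise fall back to the longest table on the page
    position <- if (length(hit) > 0) hit[1] else which.max(sapply(tables, NROW))
    Data <- tables[[position]]
    Data$Date <- as.Date(Data$Date, '%d-%b-%y')
    xts(Data[, -1], order.by = Data$Date)
}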