Efficiently reading a large multi-band raster in R with rgdal
I am working on an image classification script in R using the rgdal package. The raster in question is a PCIDSK file with 28 channels: a training data channel, a validation data channel, and 26 spectral data channels. The objective is to populate a data frame containing the values of each pixel which is not zero in the training data channel, plus the associated spectral values in the 26 bands.
In Python/NumPy I can easily import all the bands of the entire image into a multi-dimensional array; in R, however, memory limitations mean the only option seems to be importing the data block by block, which is very slow:
library(rgdal)
raster = "image.pix"
training_band = 2
validation_band = 1
BlockWidth = 500
BlockHeight = 500
# Get some metadata about the whole raster
myinfo = GDALinfo(raster)
ysize = myinfo[[1]]
xsize = myinfo[[2]]
numbands = myinfo[[3]]
# Iterate through the image in blocks and retrieve the training data
column = 0
training_data = NULL
while(column < xsize){
  # Clamp the final column of blocks to the raster edge without
  # overwriting BlockWidth itself
  width = min(BlockWidth, xsize - column)
  row = 0
  while(row < ysize){
    # Likewise for the final row of blocks; overwriting BlockHeight here
    # (as the original did) carries the truncated height into every
    # subsequent column - a bug
    height = min(BlockHeight, ysize - row)
    # Read the training band plus all 26 spectral bands for this block
    myblock = readGDAL(raster, region.dim = c(height, width),
                       offset = c(row, column),
                       band = c(training_band, 3:numbands), silent = TRUE)
    # The band values already sit in a data.frame in the @data slot,
    # so there is no need for eval(parse(...)) on constructed names
    blockdata = myblock@data
    names(blockdata) = paste0("V", seq_along(blockdata))
    # Keep only the pixels whose training value is non-zero
    blockdata = blockdata[blockdata[, 1] > 0, , drop = FALSE]
    if(nrow(blockdata) > 0){
      # rbind() in a loop re-copies the accumulated frame each time - slow
      training_data = rbind(training_data, blockdata)
    }
    row = row + height
  }
  column = column + width
}
remove(blockdata, myblock, row, column, width, height)
Is there a faster/better way of doing the same thing without running out of memory?
The next step after this training data is collected is to create the classifier (randomForest package) which also requires a lot of memory, depending on the number of trees requested. This brings me to my second problem, which is that creating a forest of 500 trees is not possible given the amount of memory already occupied by the training data:
# Note: ":" in an R formula means interaction, not a range of columns,
# so "V3:V27" would not select the band columns - build the right-hand
# side explicitly from the column names instead
myformula = formula(paste("as.factor(V1) ~",
                          paste(names(training_data)[-1], collapse = " + ")))
r_tree = randomForest(formula = myformula, data = training_data, ntree = 500, keep.forest = TRUE)
Is there a way to allocate more memory? Am I missing something? Thanks...
[EDIT]
As suggested by Jan, using the "raster" package is much faster; however as far as I can tell, it does not solve the memory problem as far as gathering the training data is concerned because it eventually needs to be in a dataframe, in memory:
library(raster)
library(randomForest)
# Set some user variables
fn = "image.pix"
training_band = 2
validation_band = 1
# Get the training data
myraster = stack(fn)
training_class = subset(myraster, training_band)
# Which() returns the cell numbers where the test is TRUE, so the
# intermediate step of setting zeros to NA is unnecessary
training_class = Which(training_class != 0, cells=TRUE)
training_data = extract(myraster, training_class)
training_data = as.data.frame(training_data)
So while this is much faster (and takes less code), it still does not solve the issue of not having enough free memory to create the classifier... Is there some "raster" package function that I have not found that can accomplish this? Thanks...
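If no raster function avoids materializing the frame, one way to at least shrink it (a sketch building on the variables above, with an arbitrary sample size) is to subsample the non-zero cell numbers before calling extract():

```r
# Hypothetical down-sampling step: if the full set of non-zero training
# pixels is too large, keep a random subset of the cell numbers before
# extracting the spectral values (50000 is an arbitrary example size)
set.seed(42)
n_keep = min(length(training_class), 50000)
training_cells = sample(training_class, n_keep)
training_data = as.data.frame(extract(myraster, training_cells))
```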
3 Answers
Check out the raster package. It provides a handy wrapper around rgdal that lets you work with a raster without loading the whole thing into memory.
http://raster.r-forge.r-project.org/
Hopefully this helps.
By using the raster package you can avoid filling your memory before using randomForest.
[EDIT] To address the memory problem with randomForest: it may help to learn the individual trees of the forest on subsamples (of size << n) rather than on bootstrap samples (of size n).
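That subsampling idea maps onto two facilities that randomForest actually provides: the sampsize argument limits how many rows each tree sees, and combine() merges forests grown in separate calls. A sketch (the sizes are arbitrary, and training_data is the frame from the question):

```r
library(randomForest)
# Each tree is grown on a random subsample of 10000 rows (arbitrary size)
# instead of a full-size bootstrap sample, bounding per-tree memory
rf_a = randomForest(x = training_data[, -1],
                    y = as.factor(training_data[, 1]),
                    sampsize = 10000, ntree = 100)
# Grow the forest in chunks so no single call needs memory for all
# 500 trees at once, then merge the ensembles
rf_b = randomForest(x = training_data[, -1],
                    y = as.factor(training_data[, 1]),
                    sampsize = 10000, ntree = 100)
rf_all = combine(rf_a, rf_b)  # 200 trees total
```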
I think the key here is this: "a data frame containing the values of each pixel which is not zero in the training data channel". If the resulting data.frame is small enough to hold in memory, you could determine this by reading just that band, trimming to only the non-zero values, and then trying to create a data.frame with that many rows and the total number of columns you want.
Can you run this?
Then you could populate the data.frame's columns one by one by reading each band separately and trimming it as for the training band.
If that data.frame is too big then you're stuck - I don't know if randomForest can use the memory-mapped data objects in "ff", but it might be worth trying.
EDIT: some example code; note that raster gives you memory-mapped access, but the question is whether randomForest can use memory-mapped data structures. You can possibly read in only the data you need, one band at a time - and you would want to try to build the full data.frame first, rather than appending columns.
Also, if you can generate the full data.frame from the start then you'll know whether it should work. By rbind()ing your way through as your code does, you need increasingly larger chunks of contiguous memory, and that might be avoidable.
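A sketch of that two-pass approach using rgdal (the file name and band numbers follow the question; treat this as untested illustration, not a drop-in solution):

```r
library(rgdal)
myinfo = GDALinfo("image.pix")
numbands = myinfo[[3]]
# Pass 1: read only the training band and record the non-zero pixels
training = readGDAL("image.pix", band = 2, silent = TRUE)@data[[1]]
keep = which(training > 0)
# Pre-allocate the full data.frame up front: one column for the training
# values plus one per spectral band - no rbind() growth later
training_data = data.frame(matrix(NA_real_, nrow = length(keep),
                                  ncol = numbands - 1))
training_data[[1]] = training[keep]
rm(training)
# Pass 2: fill one column per band, keeping only the flagged pixels,
# so at most one full band is in memory at a time
for (b in 3:numbands) {
  band_vals = readGDAL("image.pix", band = b, silent = TRUE)@data[[1]]
  training_data[[b - 1]] = band_vals[keep]
  rm(band_vals)
}
```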
Not an exact solution to what you are doing, but I'm posting this here for others who may come across it: this is how I manage memory for very large rasters.
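The code that followed this answer did not survive the page scrape. A common pattern for the general idea (a generic sketch, not necessarily what this answer went on to show) is raster's blockSize()/getValues() loop, which reads one strip of rows at a time:

```r
library(raster)
r = stack("image.pix")
bs = blockSize(r)  # raster suggests row-chunks sized to fit in memory
result = vector("list", bs$n)
for (i in seq_len(bs$n)) {
  # Matrix of values for rows bs$row[i] .. bs$row[i] + bs$nrows[i] - 1,
  # one column per band
  vals = getValues(r, row = bs$row[i], nrows = bs$nrows[i])
  # Process each chunk immediately, e.g. keep only the rows whose
  # training band (band 2 in the question) is non-zero
  result[[i]] = vals[vals[, 2] != 0, , drop = FALSE]
}
training_data = as.data.frame(do.call(rbind, result))
```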