I have very large tables (30 million rows) that I would like to load as dataframes in R. read.table()
has a lot of convenient features, but it seems like there is a lot of logic in the implementation that would slow things down. In my case, I am assuming I know the types of the columns ahead of time, the table does not contain any column headers or row names, and does not have any pathological characters that I have to worry about.
I know that reading in a table as a list using scan()
can be quite fast, e.g.:
datalist <- scan('myfile', sep='\t', list(url='', popularity=0, mintime=0, maxtime=0))
But some of my attempts to convert this to a dataframe appear to decrease the performance of the above by a factor of 6:
df <- as.data.frame(scan('myfile', sep='\t', list(url='', popularity=0, mintime=0, maxtime=0)))
Is there a better way of doing this? Or quite possibly a completely different approach to the problem?
An update, several years later

This answer is old, and R has moved on. Tweaking read.table to run a bit faster has precious little benefit. Your options are:

- Using vroom from the tidyverse package vroom for importing data from csv/tab-delimited files directly into an R tibble. See Hector's answer.
- Using fread in data.table for importing data from csv/tab-delimited files directly into R. See mnel's answer. (A minimal sketch of the fread call follows this list.)
- Using read_table in readr (on CRAN from April 2015). This works much like fread above. The readme in the link explains the difference between the two functions (readr currently claims to be "1.5-2x slower" than data.table::fread).
- read.csv.raw from iotools provides a third option for quickly reading CSV files.
- Trying to store as much data as you can in databases rather than flat files. (As well as being a better permanent storage medium, data is passed to and from R in a binary format, which is faster.) read.csv.sql in the sqldf package, as described in JD Long's answer, imports data into a temporary SQLite database and then reads it into R. See also: the RODBC package, and the reverse depends section of the DBI package page. MonetDB.R gives you a data type that pretends to be a data frame but is really a MonetDB underneath, increasing performance. Import data with its monetdb.read.csv function. dplyr allows you to work directly with data stored in several types of database.
- Storing data in binary formats can also be useful for improving performance. Use saveRDS/readRDS (see below), the h5 or rhdf5 packages for HDF5 format, or write_fst/read_fst from the fst package.
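For the question's tab-delimited, headerless file, a minimal fread call (the column classes here are assumptions) looks something like:

library(data.table)
dt <- fread("myfile", sep = "\t", header = FALSE,
            colClasses = c("character", "numeric", "numeric", "numeric"))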
The original answer

There are a couple of simple things to try, whether you use read.table or scan.

- Set nrows = the number of records in your data (nmax in scan).
- Make sure that comment.char = "" to turn off interpretation of comments.
- Explicitly define the classes of each column using colClasses in read.table.
- Setting multi.line = FALSE may also improve performance in scan.

If none of these things work, then use one of the profiling packages to determine which lines are slowing things down. Perhaps you can write a cut-down version of read.table based on the results.

The other alternative is filtering your data before you read it into R.

Or, if the problem is that you have to read it in regularly, then use these methods to read the data in once, then save the data frame as a binary blob with saveRDS, so that next time you can retrieve it faster with readRDS. A sketch of both ideas follows.
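A minimal sketch of these suggestions, assuming the 30-million-row, tab-delimited file from the question and made-up column classes:

df <- read.table("myfile", sep = "\t", header = FALSE,
                 nrows = 30000000,     # number of records, known ahead of time
                 comment.char = "",    # turn off comment handling
                 colClasses = c("character", "numeric", "numeric", "numeric"))

# read once, then cache as a binary blob for much faster subsequent loads
saveRDS(df, "myfile.rds")
df <- readRDS("myfile.rds")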
Here is an example that utilizes fread from data.table 1.8.7.

The examples come from the help page for fread, with the timings on my Windows XP Core 2 Duo E8400. The approaches compared are (a sketch of this kind of comparison follows the list):
- standard read.table
- optimized read.table
- fread
- sqldf
- ff / ffdf
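A rough sketch of this kind of comparison (not the help-page code itself), using a small generated test file:

library(data.table)

# build a hypothetical tab-delimited test file (not the help-page data)
n <- 1e6
dat <- data.frame(url = paste0("u", seq_len(n)),
                  popularity = rnorm(n), mintime = rnorm(n), maxtime = rnorm(n))
write.table(dat, "test.tsv", sep = "\t", row.names = FALSE,
            col.names = FALSE, quote = FALSE)

# standard read.table
system.time(df1 <- read.table("test.tsv", sep = "\t"))

# optimized read.table
system.time(df2 <- read.table("test.tsv", sep = "\t", nrows = n, comment.char = "",
                              colClasses = c("character", "numeric", "numeric", "numeric")))

# fread
system.time(dt <- fread("test.tsv", sep = "\t"))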
In summary:
I didn't see this question initially and asked a similar question a few days later. I am going to take my previous question down, but I thought I'd add an answer here to explain how I used sqldf() to do this.

There's been a little bit of discussion as to the best way to import 2GB or more of text data into an R data frame. Yesterday I wrote a blog post about using sqldf() to import the data into SQLite as a staging area, and then sucking it from SQLite into R. This works really well for me. I was able to pull in 2GB (3 columns, 40mm rows) of data in < 5 minutes. By contrast, the read.csv command ran all night and never completed.

Here's my test code (a sketch of the steps follows below):
Set up the test data:
I restarted R before running the following import routine:
I let the following line run all night but it never completed:
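Roughly, the steps look like this (a sketch with hypothetical sizes and file names, not the exact code from the post):

library(sqldf)

# set up hypothetical test data (~40mm rows, 3 columns)
bigdf <- data.frame(dim = sample(letters, 40000000, replace = TRUE),
                    fact1 = rnorm(40000000),
                    fact2 = rnorm(40000000, 20, 50))
write.csv(bigdf, "bigdf.csv", row.names = FALSE, quote = FALSE)

# import routine: stage the file in SQLite via sqldf, then pull it into R
f <- file("bigdf.csv")
system.time(
  bigdf.sqldf <- sqldf("select * from f", dbname = tempfile(),
                       file.format = list(header = TRUE, row.names = FALSE))
)

# the plain read.csv that ran all night
# system.time(bigdf.read.csv <- read.csv("bigdf.csv"))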
Strangely, no one answered the bottom part of the question for years even though this is an important one -- data.frames are simply lists with the right attributes, so if you have large data you don't want to use as.data.frame or similar for a list. It's much faster to simply "turn" a list into a data frame in-place:
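A minimal sketch of that in-place conversion (not necessarily the author's exact code), starting from the scan() list in the question:

datalist <- scan('myfile', sep = '\t',
                 what = list(url = '', popularity = 0, mintime = 0, maxtime = 0))

# set the data.frame attributes directly on the list -- the column vectors are not copied
attr(datalist, "row.names") <- .set_row_names(length(datalist[[1]]))
class(datalist) <- "data.frame"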
This makes no copy of the data, so it's immediate (unlike all other methods). It assumes that you have already set names() on the list accordingly.

[As for loading large data into R -- personally, I dump it by column into binary files and use readBin() - that is by far the fastest method (other than mmapping) and is only limited by the disk speed. Parsing ASCII files is inherently slow (even in C) compared to binary data.]
This was previously asked on R-Help, so that's worth reviewing.
One suggestion there was to use readChar() and then do string manipulation on the result with strsplit() and substr(). You can see that the logic involved in readChar is much less than in read.table.

I don't know if memory is an issue here, but you might also want to take a look at the HadoopStreaming package. This uses Hadoop, which is a MapReduce framework designed for dealing with large data sets. For this, you would use the hsTableReader function. This is an example (but it has a learning curve to learn Hadoop):
The basic idea here is to break the data import into chunks. You could even go so far as to use one of the parallel frameworks (e.g. snow) and run the data import in parallel by segmenting the file, but most likely for large data sets that won't help since you will run into memory constraints, which is why map-reduce is a better approach.
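For illustration only, here is a plain base-R version of the chunking idea (this is not the hsTableReader API):

con <- file("myfile", open = "r")
chunk_size <- 1000000
repeat {
  chunk <- tryCatch(
    read.table(con, sep = "\t", nrows = chunk_size, header = FALSE,
               colClasses = c("character", "numeric", "numeric", "numeric")),
    error = function(e) NULL)   # no lines left to read
  if (is.null(chunk)) break
  # ... process / aggregate this chunk here ...
  if (nrow(chunk) < chunk_size) break
}
close(con)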
An alternative is to use the vroom package, now on CRAN. vroom doesn't load the entire file; it indexes where each record is located, and the record is read later when you use it.

See Introduction to vroom, Get started with vroom and the vroom benchmarks.
The basic overview is that the initial read of a huge file will be much faster, while subsequent modifications to the data may be slightly slower. So depending on what your use is, it could be the best option.
See a simplified example from the vroom benchmarks below; the key thing to note is the super-fast read times, but slightly slower operations such as aggregation.
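As a usage sketch (not the benchmark itself), reading the question's tab-delimited, headerless file with assumed column names and types:

library(vroom)
library(dplyr)

tb <- vroom("myfile", delim = "\t",
            col_names = c("url", "popularity", "mintime", "maxtime"),
            col_types = "cddd")

# the initial read is fast; later operations touch the indexed data
tb %>% group_by(url) %>% summarise(mean_pop = mean(popularity))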
I am reading data very quickly using the new arrow package. It appears to be in a fairly early stage.

Specifically, I am using the parquet columnar format. This converts back to a data.frame in R, but you can get even deeper speedups if you do not. This format is convenient as it can be used from Python as well.

My main use case for this is on a fairly restrained RShiny server. For these reasons, I prefer to keep data attached to the Apps (i.e., out of SQL), and therefore require small file size as well as speed.
This linked article provides benchmarking and a good overview. I have quoted some interesting points below.
https://ursalabs.org/blog/2019-10-columnar-perf/
File Size
Read Speed
Independent Test
I performed some independent benchmarking on a simulated dataset of 1,000,000 rows. Basically I shuffled a bunch of things around to attempt to challenge the compression. Also I added a short text field of random words and two simulated factors.
Data
Read and Write
Writing the data is easy.
Reading the data is also easy.
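For example, with an existing data.frame df (a sketch, assuming hypothetical file names):

library(arrow)

# writing the data
write_parquet(df, "data.parquet")

# reading the data back as a data.frame, or as an Arrow Table for deeper speedups
df2 <- read_parquet("data.parquet")
tab <- read_parquet("data.parquet", as_data_frame = FALSE)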
I tested reading this data against a few of the competing options, and did get slightly different results than with the article above, which is expected.
This file is nowhere near as large as the benchmark article, so maybe that is the difference.
Tests
- arrow, with as_data_frame = FALSE
- arrow (157.2 MB read)
- feather (157.2 MB read)

Observations
For this particular file, fread is actually very fast. I like the small file size from the highly compressed parquet2 test. I may invest the time to work with the native data format rather than a data.frame if I really need the speed-up.

Here fst is also a great choice. I would either use the highly compressed fst format or the highly compressed parquet, depending on whether I needed the speed or the file-size trade-off.
A minor additional point worth mentioning: if you have a very large file, you can calculate the number of rows on the fly (if there is no header) using the following (where bedGraph is the name of your file in your working directory):
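A sketch of one way to do that (assumes a Unix-like system where wc is available):

numRow <- as.integer(sub("^\\s*(\\d+).*", "\\1",
                         system(paste("wc -l", bedGraph), intern = TRUE)))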
You can then use that count either in read.csv, read.table ...
Oftentimes I think it is just good practice to keep larger datasets inside a database (e.g. Postgres). I don't use anything much larger than (nrow * ncol) ncell = 10M, which is pretty small; but I often find I want R to create and hold memory-intensive graphs only while I query from multiple databases. In the future of 32 GB laptops, some of these types of memory problems will disappear. But the allure of using a database to hold the data and then using R's memory for the resulting query results and graphs still may be useful. Some advantages are:
(1) The data stays loaded in your database. You simply reconnect in pgadmin to the databases you want when you turn your laptop back on.
(2) It is true R can do many more nifty statistical and graphing operations than SQL. But I think SQL is better designed to query large amounts of data than R.
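A sketch of that workflow using DBI (connection details and table names here are made up; the RPostgres driver is one option):

library(DBI)

con <- dbConnect(RPostgres::Postgres(), dbname = "mydb",
                 host = "localhost", user = "me")

# let the database do the heavy querying, keep only the result in R's memory
res <- dbGetQuery(con, "SELECT url, AVG(popularity) AS mean_pop
                        FROM mytable GROUP BY url")

dbDisconnect(con)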
I wanted to contribute a Spark-based solution in the simplest form:
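A minimal sparklyr sketch (not necessarily the exact code from the post; file and column details are assumptions):

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# read the tab-delimited file into Spark without pulling it all into R memory
tbl <- spark_read_csv(sc, name = "mydata", path = "myfile",
                      delimiter = "\t", header = FALSE, memory = FALSE)

tbl %>% count()          # operations run inside Spark
spark_disconnect(sc)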
Spark generated fairly OK results:
This was tested on a MacBook Pro with 32 GB of RAM.
Remarks
- Spark usually shouldn't be able to "win" against packages optimised for speed. Nevertheless, I wanted to contribute an answer using Spark.
- Such a data.frame may prove problematic later on, when other operations are attempted on that object and hit the performance envelope of the architecture.

I think that for questions like this, where the task is to handle 1e7 or more rows, Spark should be given consideration. Even if it may be possible to "hammer" that data into a single data.frame, it just doesn't feel right. Likely that object will be difficult to work with and will create problems when deploying models, etc.
it's just doesn't feel right. Likely that object will be difficult to work with and create problems when deploying models, etc.我觉得 fread 是一个更快的函数,而不是传统的 read.table。
指定其他属性(例如仅选择所需的列、指定 colclasses 和字符串作为因素)将减少导入文件所需的时间。
Instead of the conventional read.table, I feel fread is a faster function.

Specifying additional attributes, such as selecting only the required columns, and specifying colClasses and whether strings should be read as factors, will reduce the time taken to import the file.
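For example (the column choices and classes here are assumptions):

library(data.table)

DT <- fread("myfile", sep = "\t", header = FALSE,
            select = c(1, 2),                       # read only the columns you need
            colClasses = list(character = 1, numeric = 2),
            stringsAsFactors = TRUE)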
I've tried all of the above and [readr][1] did the best job. I have only 8 GB of RAM.

Loop for 20 files, 5 GB each, 7 columns:
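A sketch of such a loop (file paths and the per-file post-processing are placeholders):

library(readr)

files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)  # 20 files, ~5 GB each
result <- vector("list", length(files))

for (i in seq_along(files)) {
  x <- read_csv(files[i], col_types = cols())   # cols() suppresses type-guessing messages
  result[[i]] <- x                              # or summarise/filter x here to stay within 8 GB RAM
}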