Run "head" on a text file inside a zipped archive, without unpacking the archive
Greetings,
I've taken over from a prior team and am writing ETL jobs which process CSV files. I use a combination of shell scripts and Perl on Ubuntu. The CSV files are huge; they arrive as zipped archives. Unzipped, many are more than 30 GB - yes, that's a G.
The legacy process is a batch job running on cron that unzips each file entirely, reads and copies its first line into a config file, then re-zips the entire file. Some days this takes many hours of processing time, for no benefit.
Can you suggest a method to only extract the first line (or first few lines) from each file inside a zipped archive, without fully unpacking the archives?
2 Answers
The unzip command-line utility has a -p option which dumps a file to standard out. Just pipe that into head and it'll not bother extracting the whole file to disk.

Alternatively, adapt the example from perldoc IO::Compress::Zip: modify it to suit, i.e. by iterating over the file list from $zip->memberNames() and only reading the first few lines.
Python's zipfile.ZipFile allows you to access archived files as streams via ZipFile.open(). From there you can process them as necessary.
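A short sketch of that approach; the helper name and the idea of wrapping the binary stream in TextIOWrapper are my own additions, not from the answer:

```python
import io
import zipfile


def first_lines(archive_path):
    """Return {member_name: first_line}, reading each member as a stream.

    ZipFile.open() yields a binary file-like object that decompresses
    on demand, so readline() inflates only the start of each member --
    nothing is extracted to disk.
    """
    out = {}
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            with zf.open(name) as raw:
                text = io.TextIOWrapper(raw, encoding="utf-8")
                out[name] = text.readline().rstrip("\n")
    return out
```

Called on an archive of CSVs, this gives you each member's header row without unpacking or re-zipping anything.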