ksh script optimization
I have a small script that simply reads each line of a file, retrieves the id field, runs a utility to get the name, and appends the name at the end. The problem is that the input file is huge (2GB). Since the output is the same as the input with a 10-30 character name appended, it is of the same order of magnitude. How can I optimize the script to read large buffers, process them in memory, and then write the buffers out, so that the number of file accesses is minimized?
#!/bin/ksh
while read line
do
    id=`echo ${line}|cut -d',' -f 3`          # id is the third comma-separated field
    NAME=$(id2name ${id} | cut -d':' -f 4)    # name is the fourth colon-separated field of id2name's output
    if [[ $? -ne 0 ]]; then
        NAME="ERROR"
        echo "Error getting name from id2name for id: ${id}"
    fi
    echo "${line},\"${NAME}\"" >> ${MYFILE}   # append the original line plus the quoted name
done < ${MYFILE}.csv
Thanks
Comments (1)
You can speed things up considerably by eliminating the two calls to cut in each iteration of the loop. It also might be faster to move the redirection to your output file to the end of the loop. Since you don't show an example of an input line, or what id2name consists of (it's possible it's a bottleneck) or what its output looks like, I can only offer the approximation sketched below. The OS will do the buffering for you, so there is no need to read and write explicit large buffers in the script.
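The answer's original code block was not preserved on this page; the following is a minimal sketch of the approach it describes. It assumes the id really is the third comma-separated field and that id2name prints a single colon-separated record on stdout; it uses read with a <<< here-string instead of cut, and the junk* variable names are placeholders.

#!/bin/ksh
while read -r line
do
    # Split the line with read instead of spawning cut: the id is
    # assumed to be the third comma-separated field.
    IFS=, read -r junk1 junk2 id junk3 <<< "$line"
    out=$(id2name "$id")
    if [[ $? -ne 0 ]]; then
        NAME="ERROR"
        # Send diagnostics to stderr so they don't end up in the data file.
        print -u2 "Error getting name from id2name for id: ${id}"
    else
        # The name is assumed to be the fourth colon-separated field
        # of id2name's output.
        IFS=: read -r junk1 junk2 junk3 NAME junk4 <<< "$out"
    fi
    print -r -- "${line},\"${NAME}\""
done < "${MYFILE}.csv" > "${MYFILE}"   # open the output file once, not once per line

Compared with the original, this starts two fewer processes per input line (the cut calls) and opens the output file once instead of once per appended line; the command substitution around id2name still costs one invocation per line, which is why id2name itself may remain the bottleneck.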
Edit:
If your version of ksh doesn't have the <<< here-string operator, try the variant below. (If you were using Bash, this wouldn't work: Bash runs the last command of a pipeline in a subshell, so variables set there by read are lost, whereas ksh runs it in the current shell.)
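Again a sketch rather than the answer's original code, under the same assumptions about the field layout: the here-strings are replaced by piping into read, which keeps the variables in ksh because the last stage of a pipeline runs in the current shell.

#!/bin/ksh
while read -r line
do
    # Piping into read sets id in the current shell under ksh
    # (in Bash this read would run in a subshell and id would stay empty).
    print -r -- "$line" | IFS=, read -r junk1 junk2 id junk3
    out=$(id2name "$id")
    if [[ $? -ne 0 ]]; then
        NAME="ERROR"
        print -u2 "Error getting name from id2name for id: ${id}"
    else
        print -r -- "$out" | IFS=: read -r junk1 junk2 junk3 NAME junk4
    fi
    print -r -- "${line},\"${NAME}\""
done < "${MYFILE}.csv" > "${MYFILE}"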