Formatting, organizing, and filtering data in text files
I'm looking to go through a bunch of text files in a bunch of folders. I'd like to go through each file line by line and do some basic statistics, like grabbing timestamps and counting repeating values. Is there any tool or scripting solution that someone could recommend for doing this?
Another possibility is to have a script/tool that could just parse these files and add them to a database like SQLite or Access, for easy filtering.
So far I've tried using AIR, but it looks like there might be too much data for it to process, and it hangs — though that could be because of some inefficient filtering.
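For what it's worth, both ideas (line-by-line stats and a SQLite load) fit in a short script. Here's a minimal Python sketch — the `load_logs` function name, the table schema, the `*.txt` glob, and the assumption that each line starts with a whitespace-delimited timestamp token are all illustrative, not taken from your files:

```python
import sqlite3
import tempfile
from collections import Counter
from pathlib import Path

def load_logs(root: Path, db_path: str) -> Counter:
    """Walk every .txt file under root, split off a leading timestamp,
    count repeated message values, and load each row into SQLite."""
    counts = Counter()
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS lines (file TEXT, ts TEXT, msg TEXT)")
    for path in sorted(root.rglob("*.txt")):       # recurse into subfolders
        with path.open(encoding="utf-8", errors="replace") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                # Assumption: the first whitespace-separated token is the timestamp.
                ts, _, msg = line.partition(" ")
                counts[msg] += 1                   # tally repeating values
                conn.execute("INSERT INTO lines VALUES (?, ?, ?)",
                             (path.name, ts, msg))
    conn.commit()
    conn.close()
    return counts

# Demo on a throwaway folder with two sample files.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "a.txt").write_text("12:00:01 login ok\n12:00:02 login failed\n")
    (root / "sub").mkdir()
    (root / "sub" / "b.txt").write_text("12:00:03 login ok\n")
    counts = load_logs(root, ":memory:")
    print(counts.most_common(1))  # → [('login ok', 2)]
```

Since rows only get inserted once, you can then do all the filtering with plain SQL queries against the database instead of re-parsing the files each time.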
I have used QuickMacros for things like this. It can do just about anything to a text file (some of it illegal in 7 states), and it can also connect to databases and perform SQL tasks like creating and modifying tables.
I routinely used it to extract data, parse it, and then load it into another database. It's especially useful with Scheduled Tasks.
Here's the website
I recommend Perl and CPAN.