Seeking advice on IPC / event capture
I have a simple Python server script which forks off multiple instances (say N) of a C++ program. The C++ program generates some events that need to be captured.
The events are currently captured in log files (one log file per forked process). In addition, I need to periodically (every T minutes) get the rate at which events are being produced across all child processes, reported either to the Python server or to some other program listening for these events (still not sure which). Based on the rate of these events, the server may take some action (say, reduce the number of forked instances).
Some approaches I have briefly looked at:
- grep log files - go through the running processes' log files (.running), filter the entries generated in the last T minutes, analyse the data and report
- socket IPC - add code to the C++ program to send the events to some server program, which analyses the data after T minutes, reports, and starts all over again
- redis/memcached (not entirely sure) - add code to the C++ program to write all generated events to a distributed store; analyse the data after T minutes, report, and start all over again
Please let me know your suggestions.
Thanks
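Whichever capture mechanism is chosen, the server side reduces to the same loop: fork N workers, wait T minutes, measure the rate, and maybe shrink the pool. A minimal sketch, where `decide_target`, `measure_rate`, and the drop-one-worker policy are all hypothetical placeholders rather than anything prescribed above:

```python
import subprocess
import time

def decide_target(current_n, rate, max_rate, min_n=1):
    """Hypothetical scaling policy: shed one worker when the aggregate
    event rate exceeds max_rate, never dropping below min_n."""
    if rate > max_rate and current_n > min_n:
        return current_n - 1
    return current_n

def supervise(cmd, n, t_minutes, measure_rate, max_rate):
    """Sketch of the supervising loop: fork n workers, then every T
    minutes measure the event rate and shrink the pool if needed.
    measure_rate stands in for whichever capture mechanism (log grep,
    sockets, a store) ends up providing the numbers."""
    procs = [subprocess.Popen(cmd) for _ in range(n)]
    try:
        while procs:
            time.sleep(t_minutes * 60)
            target = decide_target(len(procs), measure_rate(), max_rate)
            while len(procs) > target:
                p = procs.pop()  # retire the most recently forked worker
                p.terminate()
                p.wait()
    finally:
        for p in procs:
            p.terminate()

# the policy itself is trivially checkable without forking anything:
print(decide_target(4, rate=120.0, max_rate=100.0))
```

`supervise` is deliberately never run here; it only shows where the rate measurement plugs into the fork/re-action cycle.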
Comments (2)
If time is not of the essence (T minutes sounds long compared to whatever events are happening in the C++ programs that are kicked off), then don't make things any more complicated than they need to be. Forget IPC (sockets, shared memory, etc.); just have each C++ program log what you need to know about time/performance, and let the Python script check the logs every T minutes when you need the data. Don't waste time overcomplicating something that you can do in a simple manner.
As an alternative to your socket IPC suggestion, how about 0mq? It's a library (in C, with Python bindings available) that can do message transfer on an inter-thread, inter-process or inter-machine level. Pretty simple to get going, and pretty quick.
I'm not affiliated with it. I'm just evaluating it for other uses and thought it might be a fit for you as well.
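For a feel of the 0mq pattern, here is a minimal PUSH/PULL sketch using the pyzmq bindings. For brevity both ends are in Python and use the in-process transport; in the real setup each C++ worker would PUSH over tcp or ipc using the C API, and the collector would PULL and compute the rate:

```python
import threading

import zmq  # pyzmq bindings for 0mq

ctx = zmq.Context.instance()

# collector end: the server (or a standalone analyser) pulls events in
collector = ctx.socket(zmq.PULL)
collector.bind("inproc://events")

def worker(n):
    """Stand-in for a forked C++ worker pushing n events."""
    s = ctx.socket(zmq.PUSH)
    s.connect("inproc://events")
    for i in range(n):
        s.send_string(f"event {i}")
    s.close()

t = threading.Thread(target=worker, args=(5,))
t.start()
received = [collector.recv_string() for _ in range(5)]
t.join()
collector.close()
print(len(received))
```

Counting received messages per T-minute window gives the rate directly, with no log parsing at all; the trade-off versus the log approach is the extra dependency and the send calls added to the C++ program.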