Sending calls to libraries remotely over Linux
I am developing an experimental setup in C.
I am exploring the following scenario and I need help understanding it.
I have a System A which runs a lot of applications that use cryptographic algorithms.
However, these crypto calls (OpenSSL calls) should be sent to another System B, which takes care of the cryptography.
Therefore, I have to send any calls to the cryptographic (OpenSSL) engine via a socket to a remote system (B) that has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about at this point is how to handle the received commands on System B.
Do I actually take these commands and translate them into the corresponding OpenSSL calls locally on that system? That would mean I have to program everything that is done on System A again, right?
Or is there a way to tunnel/send these raw lines of code to the OpenSSL libs directly, just receive the result, and then send it back to System A?
How do you think I should go about this problem?
PS: By the way, the calls to cryptography (like EngineUpdate, VerifyFinal, etc., or Digest) on System A can be made from either Java or C. I have already written a Java/C program to send these commands to System B via sockets.
The problem is only on System B and how I have to handle the commands there.
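To make the question concrete, here is a minimal sketch (in C) of the first option, where System B translates a received request into local OpenSSL calls. The one-shot wire format (raw bytes in, raw SHA-256 digest out), the port number, and the lack of real error handling are all simplifications for illustration; it assumes OpenSSL 1.1 or later for EVP_MD_CTX_new().

```c
/* Sketch of a System B server: accept one connection, read data,
 * hash it locally with OpenSSL, and send the digest back.
 * The wire format (raw bytes in, raw digest out) is an assumption,
 * not part of the question; a real protocol needs explicit framing.
 * Requires OpenSSL 1.1+ for EVP_MD_CTX_new(). Build with -lcrypto. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <openssl/evp.h>

#define DIGEST_PORT 5555   /* hypothetical port */

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(DIGEST_PORT);

    if (srv < 0 ||
        bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 1) < 0) {
        perror("socket/bind/listen");
        return 1;
    }

    int cli = accept(srv, NULL, NULL);
    if (cli < 0) {
        perror("accept");
        return 1;
    }

    /* Hash everything the client sends until it closes its write side. */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    ssize_t n;
    while ((n = read(cli, buf, sizeof(buf))) > 0)
        EVP_DigestUpdate(ctx, buf, (size_t)n);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    EVP_DigestFinal_ex(ctx, md, &mdlen);
    EVP_MD_CTX_free(ctx);

    /* Send the raw digest back to System A. */
    write(cli, md, mdlen);
    close(cli);
    close(srv);
    return 0;
}
```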
You could use sockets on B, but that means you need to define a protocol for it. Or you could use RPC (remote procedure calls).
Examples of socket programming can be found here.
RPC is explained here.
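As a sketch of what "define a protocol" could mean in practice, the following C fragment frames every request as a fixed header (opcode plus payload length, both in network byte order) followed by the payload. The opcode values, the header layout, and send_msg() are all invented for illustration, not an existing API.

```c
/* Minimal framing sketch: each message is a header (opcode + payload
 * length in network byte order) followed by the payload bytes.
 * Opcodes and layout are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

enum { OP_DIGEST_SHA256 = 1, OP_VERIFY = 2 };  /* hypothetical opcodes */

struct msg_hdr {
    uint32_t opcode;
    uint32_t length;   /* payload bytes that follow the header */
};

/* Write one framed message to the socket; returns 0 on success. */
static int send_msg(int fd, uint32_t opcode, const void *payload, uint32_t len)
{
    struct msg_hdr hdr;
    hdr.opcode = htonl(opcode);
    hdr.length = htonl(len);

    if (write(fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
        return -1;
    if (len > 0 && write(fd, payload, len) != (ssize_t)len)
        return -1;
    return 0;
}

int main(void)
{
    /* Demonstrate the framing over a local socket pair. */
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
        return 1;

    const char data[] = "hello";
    send_msg(fds[0], OP_DIGEST_SHA256, data, sizeof(data) - 1);

    /* Peer side: read the header back to show the framing round-trips. */
    struct msg_hdr hdr;
    read(fds[1], &hdr, sizeof(hdr));
    printf("opcode=%u length=%u\n",
           (unsigned)ntohl(hdr.opcode), (unsigned)ntohl(hdr.length));
    return 0;
}
```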
The easiest (not to say "the easy", but still) way I can imagine would be to:
Of course, there are many many problems with this approach:
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library, that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library will contain code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to abstract it by making it code-driven. The best would be to use the original library's header file to define the serialization needed, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and make a custom language to describe the calls, and then use that to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that will break all applications that use it if the server is not working, which seems very risky.
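To make the wrapper idea concrete, here is a minimal sketch of a proxy for a single OpenSSL entry point, SHA256() from <openssl/sha.h>. The server address, port, and the raw-bytes-in/raw-digest-out exchange are assumptions (matching the server sketch earlier on this page), and a real wrapper would have to cover the whole API and handle errors and partial writes properly. Built as a shared object, it can be loaded ahead of libcrypto with LD_PRELOAD, which is one way of controlling the dynamic linker as described above.

```c
/* Sketch of a proxy ("wrapper") library: it exports the same symbol as
 * OpenSSL's SHA256() but, instead of hashing locally, ships the data to
 * System B and returns the digest it gets back.
 * SERVER_ADDR, SERVER_PORT and the wire format are assumptions matching
 * the earlier server sketch.
 * Build e.g.: gcc -shared -fPIC -o libsha_proxy.so sha_proxy.c
 * Load ahead of libcrypto with: LD_PRELOAD=./libsha_proxy.so ./app */
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <openssl/sha.h>   /* for the SHA256() prototype and digest size */

#define SERVER_ADDR "192.0.2.10"   /* hypothetical address of System B */
#define SERVER_PORT 5555

unsigned char *SHA256(const unsigned char *d, size_t n, unsigned char *md)
{
    static unsigned char local_md[SHA256_DIGEST_LENGTH];
    if (md == NULL)
        md = local_md;            /* mirror OpenSSL's behaviour for md == NULL */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(SERVER_PORT);
    inet_pton(AF_INET, SERVER_ADDR, &addr.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return NULL;              /* a real wrapper needs proper error handling */
    }

    /* Send the plaintext, signal end-of-data, then read the digest back. */
    write(fd, d, n);
    shutdown(fd, SHUT_WR);

    size_t got = 0;
    while (got < SHA256_DIGEST_LENGTH) {
        ssize_t r = read(fd, md + got, SHA256_DIGEST_LENGTH - got);
        if (r <= 0)
            break;
        got += (size_t)r;
    }
    close(fd);
    return md;
}
```

Note that preloading a proxy for one symbol only intercepts dynamically resolved calls to that symbol; covering the EVP interface or calls made inside libcrypto itself requires wrapping the corresponding entry points as well.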
So you basically have two choices, each outlined by unwind and ammoQ respectively:
(1) Write a server yourself and do the socket/protocol work, etc. You can minimize some of the pain by using solutions like Google's protocol buffers.
(2) Use an existing middleware solution, like (a) message queues or (b) an RPC mechanism such as CORBA and its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project you are going to be bored with in a month or two then an existing middleware solution is probably the way to go. The downside is there is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), and a bunch of others. This is an elegant solution since your calls to the remote encryption machine appear to System A as simple function calls, and the middleware handles the data issues and the sockets. But you aren't going to learn them in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.