Boost Asio asynchronous connection problem on Windows using C++
Using MS Visual Studio 2008 C++ on 32-bit Windows (XP), I am trying to build a POP3 client managed from a modeless dialog box.
The first step is to create a persistent object, say pop3, holding all the Boost.Asio machinery for asynchronous connections, in the WM_INITDIALOG message of the dialog box procedure. Something like:
case WM_INITDIALOG:
return (iniPop3Dlg (hDlg, lParam));
Here we assume that iniPop3Dlg() creates the pop3 heap object, say pointed to by pop3p, then connects to the remote server and initiates a session with the client's ID and password (the USER and PASS commands). At this point the server is in the TRANSACTION state.
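For orientation, here is a minimal sketch of how such a persistent session object might look. Pop3Session, connect(), handle_connect() and the buffer members are illustrative assumptions, not the poster's actual code:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <string>

using boost::asio::ip::tcp;

// Illustrative sketch only. One such object would be created on the
// heap in WM_INITDIALOG and kept alive for the dialog's lifetime.
class Pop3Session
{
public:
    explicit Pop3Session(boost::asio::io_service& io)
        : socket_(io), dTimer_(io) {}

    void connect(const std::string& host, const std::string& service)
    {
        tcp::resolver resolver(socket_.get_io_service());
        tcp::resolver::query query(host, service);   // e.g. "pop.example.com", "110"
        tcp::resolver::iterator it = resolver.resolve(query);

        socket_.async_connect(*it,
            boost::bind(&Pop3Session::handle_connect, this,
                        boost::asio::placeholders::error));

        socket_.get_io_service().run();   // drive the connect to completion
    }

private:
    void handle_connect(const boost::system::error_code& ec)
    {
        if (!ec)
        {
            // The USER/PASS exchange would be chained here with further
            // async_write / async_read_until calls, leaving the server
            // in the TRANSACTION state.
        }
    }

    tcp::socket socket_;
    boost::asio::deadline_timer dTimer_;
    std::string request_;                 // outgoing command buffer
    boost::asio::streambuf response_;     // incoming reply buffer
};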
Then, in response to user input, the dialog box procedure calls the appropriate function. Say:
case IDS_TOTAL: // get how many emails in the server
total (pop3p);
return FALSE;
case IDS_DETAIL: // get date, sender and subject for each email in the server
detail (pop3p);
return FALSE;
Note that total() uses the POP3 STAT command to get how many emails are on the server, while detail() uses two commands in sequence: first STAT to get the total, then a loop of RETR commands to retrieve the content of each message.
As an aside: detail() and total() share the same subroutine (the STAT handling routine) and, when finished, both leave the session as-is. That is, without closing the connection; the socket remains open and the server stays in the TRANSACTION state.
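As a rough sketch of what such a shared STAT routine might look like (continuing the hypothetical Pop3Session above; stat(), handle_stat_sent() and handle_stat_reply() are assumed names and would need matching declarations in the class):

// Send STAT and read the single-line "+OK nn mm" reply. Both total()
// and detail() could funnel through a routine like this.
void Pop3Session::stat()
{
    request_ = "STAT\r\n";
    boost::asio::async_write(socket_, boost::asio::buffer(request_),
        boost::bind(&Pop3Session::handle_stat_sent, this,
                    boost::asio::placeholders::error));

    socket_.get_io_service().reset();   // as in the question: reset...
    socket_.get_io_service().run();     // ...then run until no work remains
}

void Pop3Session::handle_stat_sent(const boost::system::error_code& ec)
{
    if (!ec)
        boost::asio::async_read_until(socket_, response_, "\r\n",
            boost::bind(&Pop3Session::handle_stat_reply, this,
                        boost::asio::placeholders::error));
}

void Pop3Session::handle_stat_reply(const boost::system::error_code& ec)
{
    // Parse "+OK <count> <octets>" out of response_ here; detail()
    // would then issue one RETR per message through the same chain.
}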
When any option is selected for the first time, things run as expected and the desired results are obtained. But on the second try, the connection hangs.
A closer inspection shows that, the first time the statement
socket_.get_io_service().run();
is reached on that second call, it never returns.
Note that all asynchronous write and read routines use the same io_service, and each routine calls
socket_.get_io_service().reset()
prior to any run().
Note also that all R/W operations use the same timer, which is reset to a zero wait after each operation completes:
dTimer_.expires_from_now (boost::posix_time::seconds(0));
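If the timer is used as a per-operation deadline, the usual idiom looks something like the sketch below (assumed names again). Note that a still-armed async_wait counts as outstanding work, which is one classic reason a run() call never returns:

// Arm the timer before each async operation, disarm it afterwards.
void Pop3Session::arm_deadline(int secs)
{
    dTimer_.expires_from_now(boost::posix_time::seconds(secs));
    dTimer_.async_wait(boost::bind(&Pop3Session::handle_deadline, this,
                                   boost::asio::placeholders::error));
}

void Pop3Session::handle_deadline(const boost::system::error_code& ec)
{
    // expires_from_now(seconds(0)) cancels a pending wait, so the
    // handler runs with error::operation_aborted, which is ignored.
    if (ec != boost::asio::error::operation_aborted)
        socket_.close();   // a real timeout: abort the blocked operation
}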
I suspect that the problem lies in the io_service or in the timer, together with the fact that subsequent executions occur in a different invocation of the routine.
As a first approach to my problem, I hope someone can shed some light on it, before I give a more detailed exposition of the (very few and simple) routines involved.
1 Answer:
Have you looked at the asio examples and studied them? There are several asynchronous examples that should help you understand the basic control flow. Pay particular attention to the main event loop started by invoking io_service::run; it is important to understand that control is not expected to return to the caller until the io_service has no more remaining work to do.
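A minimal standalone illustration of that behaviour, using a deadline_timer as stand-in work (this mirrors the answer's point, not the poster's code):

#include <boost/asio.hpp>
#include <iostream>

void on_wait(const boost::system::error_code&) {}

// run() blocks until the io_service has no work left; once it has
// returned, reset() is required before a later run() will dispatch
// newly queued work again.
int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io);

    t.expires_from_now(boost::posix_time::seconds(1));
    t.async_wait(&on_wait);
    io.run();      // returns only after the wait handler has executed

    io.reset();    // without this, the next run() returns immediately
    t.expires_from_now(boost::posix_time::seconds(1));
    t.async_wait(&on_wait);
    io.run();      // drives the second wait to completion

    std::cout << "both run() calls returned\n";
    return 0;
}

When run() exits because it ran out of work, the io_service enters the stopped state: forgetting the reset() makes the next run() return at once, while leaving work pending (for example an armed timer wait) makes run() block and never return.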