Forcing synchronous completion of outstanding asynchronous operations in Boost.Asio
I have an object that receives callbacks from a boost::io_service, and for a few reasons I cannot post the callback via a shared pointer (yes, I know it's the official way to handle it), so I bind the handler with a raw pointer. Assume this is a fixed requirement in this situation.
Now, if I delete the object, it will, of course, still receive a callback on an outstanding socket operation with an "operation aborted" error code.
The question: is there a way to force a synchronous completion of all operations when I delete an object together with its owned Asio objects (sockets, timers)?
Comments (1)
You can't; the information has been lost at that point. You can't even compare the function objects for equality, let alone peek inside and compare some pointer and then decide what to do.
So, the question is: why can't you use a shared pointer?
The way to do it is to use shared pointers and weak pointers. If you don't want to use shared pointers and weak pointers, you can implement the underlying mechanisms yourself. But it is generally more reliable just to use the library implementations.
So, use a weak pointer in the callback: have the callback take a weak_ptr as an argument, call wp.lock(), check the result, and dereference it only if it is still valid. There is still a race condition between clearing the main shared_ptr in one thread and calling wp.lock() in another (assuming you have multiple threads), but you can resolve this by using a flag in the object.
Update with response to comment:
Asio is not forcing you to use a shared_ptr/weak_ptr combination. You are free to build your own solution, but you have to address the same issues.
Assuming you can't use a weak_ptr, you should delete the object once you are sure nothing else will use the pointer. In principle you have two basic ways of doing this:
Detecting that the object has been deleted by using some additional data structure. This is what shared_ptr/weak_ptr do internally, and you are free to build your own equivalent.
Wait for everything to complete and then delete the object. There is no requirement to use shared_ptr/weak_ptr, but you need to do the bookkeeping somehow.
In these cases you end up keeping track of what's outstanding either by hand or using a library. The basic task is the same, but you're not forced to use a library. You are forced to solve that general problem.
The approach you are asking for, synchronously "cancelling" every outstanding operation so that you can safely delete an object, reduces to one of these cases.
Consider:
What does the call to cancel_outstanding_operations() do if the call to Obj::io_done() is in progress in another thread? Does it wait for it to return, or does it return immediately because the I/O operation is complete? In the "return immediately" case, the "delete o" statement is not safe. In the "wait for it to return" case, you have the "wait for everything to complete" case above, except that you've added a bunch of implementation complexity and you have to do the wait synchronously.