Does placing a block on a sync GCD queue lock that block and pause other blocks?
I read that GCD synchronous queues (dispatch_sync) should be used to implement critical sections of code. An example would be a block that subtracts a transaction amount from an account balance. The interesting part of sync calls is the question: how does that affect the work of other blocks on multiple threads?
Let's imagine a situation where there are 3 threads that use and execute both system and user-defined blocks from the main queue and custom queues in asynchronous mode. Those blocks are all executed in parallel in some order. Now, if a block is put on a custom queue with sync mode, does that mean that all other blocks (including those on other threads) are suspended until that block finishes executing? Or does it mean that only some lock is placed on that block while the others still execute? However, if other blocks use the same data as the sync block, then it's inevitable that they will wait until that lock is released.
IMHO it doesn't matter whether it's one core or multiple cores: sync mode should freeze the work of the whole app. However, these are just my thoughts, so please comment on that and share your insights :)
3 Answers
Synchronous dispatch suspends the execution of your code until the dispatched block has finished. Asynchronous dispatch returns immediately; the block is executed asynchronously with regard to the calling code:
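The code sample that originally followed this colon did not survive in this copy; a minimal sketch of the difference, with a placeholder queue label and log messages, might look like this:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_queue_create("com.example.demo", DISPATCH_QUEUE_SERIAL);

        dispatch_sync(queue, ^{
            NSLog(@"sync block");   // the calling code waits until this block finishes
        });
        NSLog(@"after sync");       // always logged after "sync block"

        dispatch_async(queue, ^{
            NSLog(@"async block");  // runs later, whenever the queue gets to it
        });
        NSLog(@"after async");      // may be logged before or after "async block"

        dispatch_sync(queue, ^{});  // drain the queue before exiting (demo only)
    }
    return 0;
}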
And there are two kinds of dispatch queues, serial and concurrent. The serial ones dispatch the blocks strictly one by one, in the order they were added. When one finishes, another one starts. Only one thread is needed for this kind of execution. The concurrent queues dispatch the blocks concurrently, in parallel, using more threads.
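For reference, both kinds are created with dispatch_queue_create; only the queue attribute differs (the labels here are placeholders):

#import <dispatch/dispatch.h>

static dispatch_queue_t serialQueue;
static dispatch_queue_t concurrentQueue;

static void setUpQueues(void) {
    // Serial: blocks run one at a time, in FIFO order.
    serialQueue     = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    // Concurrent: blocks may run in parallel on several threads.
    concurrentQueue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
}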
You can mix and match sync/async dispatch and serial/concurrent queues as you see fit. If you want to use GCD to guard access to a critical section, use a single serial queue and dispatch all operations on the shared data on this queue (synchronously or asynchronously, it does not matter). That way there will always be just one block operating on the shared data:
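The example that followed is also missing here; judging from the identifiers referenced below, it was presumably something along these lines (treating sharedFoos as an NSMutableArray is an assumption):

#import <Foundation/Foundation.h>

@interface FooStore : NSObject
- (void)addFoo:(id)foo;
- (void)removeFoo:(id)foo;
@end

@implementation FooStore {
    dispatch_queue_t guardingQueue;  // serial queue guarding the shared data
    NSMutableArray *sharedFoos;      // the shared data (assumed to be a mutable array)
}

- (instancetype)init {
    if ((self = [super init])) {
        guardingQueue = dispatch_queue_create("com.example.guarding", DISPATCH_QUEUE_SERIAL);
        sharedFoos = [NSMutableArray array];
    }
    return self;
}

- (void)addFoo:(id)foo {
    // All mutations of sharedFoos go through guardingQueue,
    // so at most one block touches the array at any time.
    dispatch_sync(guardingQueue, ^{
        [self->sharedFoos addObject:foo];
    });
}

- (void)removeFoo:(id)foo {
    dispatch_sync(guardingQueue, ^{
        [self->sharedFoos removeObject:foo];
    });
}
@end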
Now if guardingQueue is a serial queue, the add/remove operations can never clash, even if the addFoo: and removeFoo: methods are called concurrently from different threads.
No it doesn't.
The synchronised part is that the block is put on a queue but control does not pass back to the calling function until the block returns.
Many uses of GCD are asynchronous; you put a block on a queue and, rather than waiting for the block to complete its work, control is passed back to the calling function.
This has no effect on other queues.
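A short sketch of that last point (the queue labels and messages are placeholders): a dispatch_sync call blocks only its caller, while blocks on an unrelated queue keep running.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        dispatch_queue_t queueA = dispatch_queue_create("com.example.A", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t queueB = dispatch_queue_create("com.example.B", DISPATCH_QUEUE_SERIAL);

        // Work on queueB is not suspended by the sync call on queueA below.
        dispatch_async(queueB, ^{
            NSLog(@"queue B keeps working");
        });

        // Only the calling thread waits here, until the block has run on queueA.
        dispatch_sync(queueA, ^{
            NSLog(@"critical section on queue A");
        });
        NSLog(@"caller continues once the sync block has returned");

        dispatch_sync(queueB, ^{});  // drain queueB before exiting (demo only)
    }
    return 0;
}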
If you need to serialize access to a certain resource, then there are at least two mechanisms available to you. If you have an account object (that is unique for a given account number), then you can do something like:
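The code sample is missing from this copy; one mechanism that fits the description is Objective-C's @synchronized directive, which locks on the account object itself (the Account class and debit function below are illustrative, not this answer's original code):

#import <Foundation/Foundation.h>

// Illustrative account class; the property names are assumptions.
@interface Account : NSObject
@property (nonatomic) double balance;
@end

@implementation Account
@end

static void debitAccount(Account *account, double amount) {
    // Two threads debiting the same account serialize here;
    // different account objects do not block each other.
    @synchronized (account) {
        account.balance -= amount;
    }
}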
If you don't have an object, but are using a C structure of which there is only one instance for a given account number, then you can do the following:
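This sample is missing as well; a lock embedded in the structure, for example a pthread mutex, would fit the description (the account_t layout here is an assumption):

#include <pthread.h>

// Illustrative structure; the field names are assumptions.
typedef struct {
    int             number;
    double          balance;
    pthread_mutex_t lock;    // protects balance; initialize with pthread_mutex_init()
} account_t;

static void account_debit(account_t *account, double amount) {
    pthread_mutex_lock(&account->lock);    // only one thread per account gets past this point
    account->balance -= amount;
    pthread_mutex_unlock(&account->lock);
}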
With this, regardless of the level of concurrency of your queues, you are guaranteed that only one thread will access an account at any given time.
There are many more types of synchronization objects, but these two are easy to use and quite flexible.