How to determine the maximum number of messages in a *USRQ
It is required to create *USRQ of the largest possible size.
According to the documentation, the maximum size for *USRQ is 2Gb.
Creating a queue requires specifying the maximum message size, the initial number of messages in the queue, the size (in messages) of the queue expansion, and the maximum number of expansion operations.
Let the message size be 1024 bytes, the initial number of messages 128, and let the queue expand by 128 messages at a time.
Estimate the maximum possible number of messages: 2GB / 1024 bytes. Then subtract the initial number of messages (128) and divide by the extension size (128). The result is the maximum number of extensions: 16,383.
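The arithmetic above can be sketched as follows (a minimal illustration, taking the documented 2GB limit as 2^31 bytes):

```python
# Naive estimate of the maximum number of *USRQ extensions,
# assuming the full documented 2GB is usable (no overhead yet).
LIMIT = 2 ** 31        # 2GB documented maximum *USRQ size
MSG_SIZE = 1024        # maximum message size, bytes
INITIAL = 128          # initial number of messages
PER_EXTENSION = 128    # messages added per extension

max_messages = LIMIT // MSG_SIZE                          # 2,097,152
max_extensions = (max_messages - INITIAL) // PER_EXTENSION
print(max_extensions)  # -> 16383, the value requested from QUSCRTUQ
```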
We pass these parameters to QUSCRTUQ and then inspect the result with a MATQAT call.
We see that the maximum number of extensions (mat_template.Max_Extend) is set lower than requested, at 15,306, so the maximum number of messages in the queue is 1,959,296.
We then start filling the queue and at some point get the error "Tried to go larger than storage limit". At that point, the number of messages in the queue is 1,957,504 and the number of extensions used is 15,282.
Why does this happen, and how can the maximum number of extensions be estimated correctly when creating the queue?
4 Answers
Consider that there is (and must be) some internal "overhead" for a queue to keep all of the enqueued messages chained together in the proper LIFO or FIFO order, etc. These internal "linked lists" or "pointers" are not "free".
Create a small test *USRQ, and then do a DMPOBJ of that. Then add a few messages to the queue, and DMPOBJ of that again. Then "de-queue" a few messages, and do another DMPOBJ. Then compare the spool files of these dumps to see what is going on.
I think with more experimentation you will find that IBM i always allocates memory in pages. It is probably going through the allocation procedure for each message and allocating space on the *USRQ from the previous memory allocation until the allocated memory is full, then it allocates another block large enough to contain the next allocation.
Let me give you a simple example. A dataarea can be up to 2000 bytes, but no matter what size you make it, it always shows a physical size of 8K bytes. Internally, it knows how much of that 8K space can be used.
So what is happening?
So there are a couple allocation schemes going on here. The *USRQ routines are extending the logical size of the object based on the *USRQ parameters, and the OS is allocating physical memory in page size blocks as requested by the *USRQ routines. Thus a *USRQ extension does not necessarily require a physical memory allocation. It also points out a possible optimization based on setting the initial size of the queue such that it fills up at least 16K, and also setting the size of the extension such that it fills up at least 4K. That way the API is not running through the extension logic more often than necessary.
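The two-level scheme described above can be sketched like this (hypothetical numbers; the point is only that logical extensions and physical page allocations move at different rates):

```python
PAGE = 4096  # physical allocation granularity (one page)

def pages_needed(logical_bytes: int) -> int:
    """Physical pages required to back a given logical size (ceiling division)."""
    return -(-logical_bytes // PAGE)

# Growing a queue logically one message slot at a time: most extensions fit
# inside the already-allocated page, so no new physical allocation happens;
# only occasionally does a new page get allocated.
SLOT = 144  # hypothetical physical size of one message slot, bytes
growth_points = [n for n in range(1, 200)
                 if pages_needed(n * SLOT) > pages_needed((n - 1) * SLOT)]
# growth_points begins [1, 29, 57, 86, ...]: a new page only every ~28 slots
```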
This doesn't really answer your question directly, but I don't think my comments are working, so I'll try an answer.
The documentation says
In other words, the number of extensions you specify in QUSCRTUQ is just a request; the system will stop you before you reach that number if the limit would be exceeded.
As pointed out in Mark's answer, there is going to be overhead which uses up some of that 2GB, so it shouldn't be surprising that MATQAT reports a number of extensions lower than what you requested.
What is surprising (to me, anyway) is that when you actually add data to the queue, you might not even get the number of extensions reported by MATQAT. So the only way to get an accurate number seems to be adding data until you get the error message.
Given that, the next thing I would look at is what the possible user inputs are. Can they enter literally anything? Or do they have to choose among a limited set of values? If the number of possible combinations of user input is "manageable" then you could write a script which tries all of them. Use a huge value for the number of extensions (deliberately requesting more than 2GB) and monitor for the error message to capture the number of extensions you really got. If you can do this for every possible user input, then you can make a lookup table for the number of extensions, and actually use it instead of doing arithmetic.
If there are too many possible user inputs, then you just have to try a representative sample. And instead of a lookup table, do arithmetic, but based on the smallest effective size encountered in the sample (or even smaller, to provide a margin of safety).
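The lookup-table idea can be sketched as follows. Everything here is hypothetical scaffolding: on a real system, `probe` would be QUSCRTUQ with a deliberately oversized extension request followed by enqueuing in a loop until the storage-limit error, whereas this stand-in just simulates it with an assumed 80 bytes of per-message overhead:

```python
def probe_simulated(msg_size: int, initial: int, per_ext: int,
                    limit: int = 2 ** 31, overhead: int = 80) -> int:
    """Stand-in for the real probe (create queue, enqueue until MCH2804).
    Returns the number of extensions that actually fit under the limit."""
    slot = msg_size + overhead            # assumed physical size per message
    max_msgs = limit // slot
    return (max_msgs - initial) // per_ext

# Build a lookup table over a manageable set of possible user inputs
inputs = [(1024, 128, 128), (512, 64, 64)]
lookup = {params: probe_simulated(*params) for params in inputs}
```

The table is then consulted at queue-creation time instead of doing the arithmetic from scratch.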
Firstly, I want to say thanks to everyone who commented and answered - it helped a lot to find a solution.
Special thanks to @MarkSWaterbury - your answer was very helpful in finding the right approach to a solution.
Here is what we managed to understand and observe.
The following parameters were chosen for the experiment:
Thus, each time a new message is added, the size of the queue should increase.
After the queue is created, its size is 16KB. This is the same as the size of the "associated space" (which can be obtained via MATMDATA).
Regardless of the message size and the queue increment size (the number of messages per increment), the physical size of the queue does not grow with every message; it grows only when capacity runs out, and then by a multiple of one page (MATMDATA reports a page size of 4KB).
In our case, the first 26 messages do not change the physical size of the queue; the 27th increases it by 4KB. The physical size then stays constant until the 56th message, which adds another 4KB; it stays constant again up to the 86th, and the 87th adds another 4KB...
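A rough back-of-envelope on those growth points (our own arithmetic for illustration, not part of the original measurements):

```python
PAGE = 4096
growth_at = [27, 56, 87]            # messages at which the queue grew by a page
pages = len(growth_at) - 1          # 2 pages added between first and last growth
span = growth_at[-1] - growth_at[0] # 60 messages over those 2 pages

approx_slot = PAGE * pages / span   # ~136.5 bytes of physical storage
                                    # per message (payload + header + overhead)
```

That per-message footprint is what the header and internal overhead have to be extracted from.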
All this allows us to estimate the "internal overhead" that Mark S Waterbury mentioned.
It should also be noted that the physical size of a message in the queue consists of the user-specified maximum message size plus the size of the message header - the message attributes structure filled in by MATQMSG, containing:
For a KEYED queue, the key size is added to the physical size of the message.
Using the proposed method, we can roughly estimate the "internal overhead" mentioned above at 64 bytes (perhaps slightly less: because the physical size grows in page increments, the exact value differs each time, but with 16-byte alignment let it stand at 64).
In total, we get an additional 80 bytes on top of the sum of the user-specified maximum message size and the key size (the key size applies to KEYED queues and is 0 for FIFO and LIFO queues).
Now, if we compute the maximum number of extensions from the documented maximum queue size (2GB) and the physical message size defined above, we get a correct value to pass to QUSCRTUQ. If we then call MATQAT and recompute the maximum number of messages from the values it returns (initial number of messages, additional number of messages, and number of queue extensions), we get the real maximum number of messages the queue can hold without throwing exception 1C04 "Object Storage Limit Exceeded" (MCH2804).
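The resulting estimate can be sketched as follows (a sketch of the calculation just described; the 80-byte figure is the empirical message header plus internal overhead from the measurements above, not a documented constant):

```python
LIMIT = 2 ** 31   # documented 2GB *USRQ ceiling
EXTRA = 80        # empirical per-message header + internal overhead, bytes

def max_extensions(max_msg_size: int, initial: int, per_ext: int,
                   key_size: int = 0) -> int:
    """Conservative maximum extension count to pass to QUSCRTUQ.
    key_size is nonzero only for KEYED queues."""
    slot = max_msg_size + EXTRA + key_size   # physical bytes per message
    max_msgs = LIMIT // slot
    return (max_msgs - initial) // per_ext
```

For the original parameters (1024-byte messages, 128 initial, +128 per extension) this yields 15,195 extensions, safely below the empirically observed failure point of 15,282.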
Thus, you can now control how full the queue is by comparing the current number of messages against the computed maximum, without fear of an exception.
And what is all this for, when everything is much simpler with *DTAQ?
The thing is, if we don't need all the features of *DTAQ (journaling, saving contents to disk, remote queues...) and we only work with local queues, *USRQ is 4-5 times faster and consumes correspondingly less CPU.