Strange memory allocation behavior with a threaded class on an embedded platform
I am running into a strange issue I've been able to track down somewhat but I still can't see the cause. Maybe someone here can shed some light?
I'm running on a PowerPC processor on top of VxWorks 5.5 developing in C++ with the PPCgnu604 toolchain.
I have a class like so:
class MyClass
{
public:
    void run( void );

private:
    CommandMessageClass command;
    StatusMessageClass  status;
};
When my application is started, it will dynamically allocate an instance of MyClass and spawn a thread pointing to its "run" function. Essentially it just sits there polling for commands and, upon receipt, will issue a status back.
Note that this is a chopped down version of the class. There are a number of other methods and variables left out for brevity.
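For reference, the startup sequence looks roughly like the following sketch; the wrapper function, task name, priority, and stack size are illustrative placeholders rather than the real values from my application:

#include <vxWorks.h>
#include <taskLib.h>

// Static entry point so taskSpawn (which takes a plain C function pointer)
// can dispatch into the C++ member function.
extern "C" int myClassRunEntry( int arg )
{
    reinterpret_cast< MyClass * >( arg )->run();
    return OK;
}

void startMyClassTask( void )
{
    MyClass *instance = new MyClass;        // single heap allocation at startup

    taskSpawn( "tMyClass",                  // task name (placeholder)
               100,                         // priority (placeholder)
               0,                           // options
               0x4000,                      // stack size (placeholder)
               (FUNCPTR) myClassRunEntry,   // entry point wrapper
               (int) instance,              // pass the object pointer as arg1
               0, 0, 0, 0, 0, 0, 0, 0, 0 ); // remaining args unused
}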
The issue I see is that when both the command and status messages are defined as private class members, I get a change in the available bytes in memory despite the fact that there should be no dynamic memory allocation. This is important because it is occurring in what needs to be a deterministic and rate-safe procedure.
If I move one or both of the message declarations into the run function, it works fine with no additional allocation!
I must be missing something fundamental in my understanding of C++ declarations and memory allocation. My understanding is that a class instance that I dynamically instantiate will be fully allocated on the heap (including all member variables) when it's created. The difference I see here would be that moving the message declarations to the run function puts them on the stack instead. The heap in this case is more than large enough to accommodate the entire size of the class. Why does it seem not to be allocating enough memory until specific portions are used?
The message classes do no dynamic allocation of their own. (And if they did, I would expect moving the declaration would not change the behavior in this case and I would still see a change in the size of the heap.)
To monitor the memory allocation I'm using the following VxWorks memLib (or memPartLib) call:
memPartInfoGet( memSysPartId, &partitionStatus );
...
bytesFree = partitionStatus.numBytesFree;
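A minimal before/after wrapper around that call looks like the sketch below; the helper names are placeholders of mine, and the idea is just to snapshot numBytesFree around a suspect operation:

#include <vxWorks.h>
#include <memLib.h>
#include <stdio.h>

// Return the current number of free bytes in the system memory partition,
// or 0 if the query fails.
unsigned long systemBytesFree( void )
{
    MEM_PART_STATS partitionStatus;

    if ( memPartInfoGet( memSysPartId, &partitionStatus ) != OK )
        return 0;

    return partitionStatus.numBytesFree;
}

void checkForHiddenAllocation( void )
{
    unsigned long before = systemBytesFree();

    // ... exercise the suspect code path here, e.g. handle one command ...

    unsigned long after = systemBytesFree();

    if ( after < before )
        printf( "hidden allocation of %lu bytes\n", before - after );
}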
Edit:
To clarify, the MyClass object is instantiated and initialized in an initialization routine, and then the code enters rate-safe processing. During this time, upon receipt of a command message over a serial line (the first interaction with the command or status message objects), additional memory is allocated (or rather, the number of bytes free decreases). This is bad because dynamic memory allocation is not deterministic.
I've been able to get rid of the problem by moving the class variables as I've described.
1 Answer
I don't think so. Everything you expect above is correct -- game programmers rely heavily on this behavior all the time. :-)
You've left out the guts of the class for brevity. I've had some experience debugging similar issues, and my best guess is that somewhere in there a library function is, in fact, making a runtime allocation that you don't know about.
In other words, the runtime allocation is there in both cases, but the two different sizes of MyClass mean that the malloc pools are filled differently. You could prove this by moving the objects to the stack inside run(), but padding MyClass out to the same size. If you still see the free mem drop, then it has nothing to do with whether those objects are on the heap or the stack ... it's a secondary effect that's happening because of the size of MyClass.
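The experiment could look something like this minimal sketch; the padding array and member names are assumptions, and the only goal is to keep sizeof(MyClass) identical in both configurations:

// Padding experiment: command and status are declared inside run() instead,
// while a dummy array keeps the overall size of MyClass unchanged so the
// malloc pools are carved up the same way in both builds.
class MyClass
{
public:
    void run( void );   // declares CommandMessageClass / StatusMessageClass locally

private:
    char padding[ sizeof( CommandMessageClass ) + sizeof( StatusMessageClass ) ];
};

If the free-memory drop still shows up with the padded class, then heap-versus-stack placement of the message objects is not the real cause.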
Remember, malloc is chunky -- most implementations don't do one-to-one allocations for each call to malloc. Instead it over-allocates and keeps the memory around in a pool, and grows those pools when necessary.
I'm not familiar with your toolchain, but typical suspects for unexpected small allocations on embedded systems include ctype functions (locales), and date/time functions (time zone).
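If one of those does turn out to be the culprit, one mitigation is to exercise the routine once during initialization so any lazy allocation happens before rate-safe processing begins. A hedged sketch follows; whether each of these calls actually allocates depends on your C library build:

#include <time.h>
#include <ctype.h>
#include <stdio.h>

// Exercise library facilities that often allocate lazily on first use, so the
// allocation (if any) happens during init rather than in rate-safe processing.
void warmUpLazyLibraryState( void )
{
    time_t now = time( 0 );
    (void) localtime( &now );              // time-zone state

    (void) isalpha( 'A' );                 // ctype / locale tables

    char scratch[ 32 ];
    (void) sprintf( scratch, "%d", 42 );   // formatted I/O buffers
}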