Passing parameters between interrupt handlers on a Cortex-M3

Posted 2024-08-31 08:12:03

I'm building a light kernel for a Cortex-M3.

From a high priority interrupt I'd like to invoke some code to run in a lower priority interrupt and pass some parameters along.

I don't want to use a queue to post work to the lower priority interrupt.

I just have a buffer and size to pass to it.

In the programming manual it says that the SVC interrupt handler is synchronous, which presumably means that if you invoke it from an interrupt that's a lower priority than SVC's handler it gets called immediately. The upshot of this is that you can pass parameters to it as though it were a function call (a little like the BIOS calls in MS-DOS).
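For reference, I picture that SVC "function call" style looking roughly like the sketch below, using GCC-style inline assembly; svc_call and the argument names are only illustrative.

    #include <stdint.h>

    /* Sketch only: arguments go in r0-r3, the SVC handler reads them from the
     * stacked exception frame and can hand a result back in r0. */
    static inline uint32_t svc_call(uint32_t arg0, uint32_t arg1)
    {
        register uint32_t r0 __asm__("r0") = arg0;
        register uint32_t r1 __asm__("r1") = arg1;
        __asm__ volatile ("svc #0" : "+r"(r0) : "r"(r1) : "memory");
        return r0;
    }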

I'd like to do it the other way: passing parameters from a high priority interrupt to a lower priority one (at the moment I'm doing it by leaving the parameters in a fixed location in memory).
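Roughly, what I have now looks like the sketch below (CMSIS-style names are assumed; WORKER_IRQn, post_work and handle_buffer are just placeholders, and the low-priority interrupt is a spare one I pend by hand):

    #include <stddef.h>
    #include <stdint.h>
    /* plus the device's CMSIS header for NVIC_SetPendingIRQ(), __DSB()
     * and the IRQ numbers; WORKER_IRQn below is a placeholder. */

    typedef struct {
        uint8_t *buf;
        size_t   len;
    } work_item_t;

    static volatile work_item_t g_work;        /* the "fixed location in memory" */

    extern void handle_buffer(uint8_t *buf, size_t len);   /* placeholder consumer */

    /* Called from the high-priority handler to hand the buffer down. */
    void post_work(uint8_t *buf, size_t len)
    {
        g_work.buf = buf;
        g_work.len = len;
        __DSB();                               /* make the stores visible before pending */
        NVIC_SetPendingIRQ(WORKER_IRQn);       /* low-priority handler runs once we return */
    }

    /* The lower-priority handler picks the parameters up from the fixed location. */
    void Worker_IRQHandler(void)
    {
        handle_buffer(g_work.buf, g_work.len);
    }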

What's the best way to do this (if at all possible)?

Thanks,

Comments (1)

自控 2024-09-07 08:12:03

I'm not familiar with the Cortex-M3 architecture, but I'm sure what you need is to provide a locking mechanism on the shared memory.

The higher priority interrupt can interrupt the lower priority processing at any time (unless you are somehow specifically synchronizing this with hardware and you are guaranteed this won't happen, but that is probably not the case).

The locking mechanism may be as simple as a one-bit flag, updated inside a critical section (disabling interrupts for the read-modify-write on the flag) to guarantee an atomic exchange on the locking flag (i.e. if the lower priority process/interrupt is accessing/updating the locking flag, the higher priority interrupt cannot come in and change it underneath it). The flag is then the synchronization mechanism for reading and writing to the shared memory space, allowing each side to lock out the other while it is accessing the shared resource, without disabling interrupts for an extended time. (I guess if the shared memory access is quick enough, you could just disable interrupts while you access the shared memory directly.)
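A rough sketch of what I mean in C, assuming CMSIS-style intrinsics are available (try_lock and unlock are names I'm making up):

    #include <stdbool.h>
    #include <stdint.h>
    /* assumes the CMSIS core header for __get_PRIMASK(), __disable_irq(), __set_PRIMASK() */

    static volatile bool g_locked;

    /* Try to take the lock; returns true on success. */
    bool try_lock(void)
    {
        uint32_t primask = __get_PRIMASK();    /* remember the current interrupt mask */
        __disable_irq();                       /* critical section around the read-modify-write */
        bool acquired = !g_locked;
        if (acquired) {
            g_locked = true;
        }
        __set_PRIMASK(primask);                /* restore the previous interrupt state */
        return acquired;
    }

    void unlock(void)
    {
        g_locked = false;                      /* single aligned store, atomic on its own */
    }

Each side would call try_lock() before touching the shared buffer and unlock() when done; if the attempt fails, the lower priority side can retry, while the higher priority one would skip or defer its update rather than spin (spinning there would deadlock, since the lower priority code can't run to release the lock).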
