Microcontroller + Verilog/VHDL simulator?

Posted 2024-07-11 06:20:17


Over the years I've worked on a number of microcontroller-based projects; mostly with Microchip's PICs. I've used various microcontroller simulators, and while they can be very helpful at times, I often find myself frustrated. In real life microcontrollers never exist alone and the firmware's behavior is dependent on the environment. However, none of the sims I've used provide decent support for anything outside the microcontroller.

My first thought was to model the entire board in Verilog. But, I'd rather not create an entire CPU model, and I haven't had much luck finding existing models for the chips I use. Regardless, I really don't need, or want, to simulate the proc at that level of detail, and I'd like to retain the debugging facilities provided by a regular processor sim.

It seems to me that the ideal solution would be a hybrid simulator that interfaces a traditional processor simulator with a Verilog model.

Does such a thing exist?


Comments (6)

鸠魁 2024-07-18 06:20:17


I've used the Altera Nios II processor embedded on an FPGA. Altera provides a toolchain for simulating the CPU (with its software) together with your custom logic in a simulator. I suppose that a similar setup can be achieved by downloading a VHDL/Verilog core of your CPU (did you try OpenCores? They have lots of stuff there).

But keep in mind that it is going to be mind-bogglingly slow, so don't expect to simulate whole complex processes this way. The best you can hope for is simulating fine software-hardware interaction points to debug problems. If you need a deeper simulation, consider running it on an FPGA with built-in monitoring code.

许仙没带伞 2024-07-18 06:20:17


For the "simulate the whole board" approach, the Free Model Foundry has a large number of models, some in VHDL, others in Verilog, that are available now, but you'll need to pay to have new models created. These are very helpful in being sure the board is built correctly.

But I think the more common approach when debugging your PIC is to just build a board, then work on the firmware. In the chip world (where the firmware is running on a microprocessor in a chip that hasn't gone to fab yet), people often resort to very expensive systems (or renting time on them) that allow compiling part of the design into an emulator while the rest of the design runs in the normal simulator environment. Without the barrier of an expensive mask set for the chip, the cost is just not justifiable for a circuit board. I've heard of some creative applications of Simulink (MathWorks) with FPGAs, but my recollection is that one either ran the system on the computer, or programmed the device and ran the same thing in real time.

I believe both Cadence (ask about Palladium) and Mentor Graphics have that integrated solution if you have the money to spend on it.

浅语花开 2024-07-18 06:20:17


What I have done recently is create an interface between the simulation environment and the host system. Different HDL simulators have different interfaces, and getting the simulator NOT to think in batch mode (the traditional simulation model) but instead run forever like a real design is half of the problem.

Then from the host using C (or whatever) you can create abstractions that may or may not allow you to write your application software for whatever target (depending on what language and compiler capabilities you have). For example you can make a generic poke and peek function and on the final target have those actually poke and peek memory or I/O, but for simulation through the abstraction you talk to a testbench in the simulation that simulates the same memory cycle.

I went one step further and used (Berkeley) sockets between the host and test bench so that the simulation can keep running while the host applications stop and start. Not unlike having a real processor with an OS that you are starting applications and running them to completion and starting another. At least for test applications, for delivery you probably only have one app.

By creating these abstraction layers I can write real applications that will be used on the target when it is built. Along the way you can use software simulation of the logic initially, then if you like build an FPGA with an abstraction interface (throw-away logic), say a UART for example. Replace the shim between the application's abstraction layer and the simulator with a UART interface, or whatever. Then when you marry the processor and logic in the same chip or on the same board, replace the abstraction layer again with direct calls to whatever interfaces they always thought they were talking to. If something breaks and you have retained the abstraction layer, you can take the application back to the simulation model and have access to all of your logic internals.

Specifically, this time around I am using an HDL language, Cyclicity CDL, which is on SourceForge; the documentation needs some help, but the examples may get you going, and it produces synthesizable Verilog, so you get an extra win there. I threw out all the scripting batch stuff other than the bare minimum needed to connect and start a C simulation model. So my test bench is in C (well, C++ technically) and the sockets layer was done there. The output can be .vcd files, which gtkwave uses. Basically you can do the bulk of your HDL design using open source software with no licenses, etc. By adding one or two lines of code to the CDL simulation part I was able to have it run as an infinite loop, which I can say works quite well; there don't appear to be any memory leaks, etc.

Both ModelSim and Cadence have standardized ways of connecting host C programs to the simulation world, and from there you can use IPC to get host applications talking to an abstraction-layer API.

This is probably way overkill for a PIC; I gave up on PICs a while ago for the faster and more C-friendly ARM-based micros anyway. There is/was an open-core PIC that you could simply incorporate into your simulation, even though that is not what you are trying to do here.

毅然前行 2024-07-18 06:20:17


Not that I've seen. Your best bet is to properly define the interfaces and behavior between the uC and FPGA and then define a series of test waveforms that can be applied using an automated tester. You would have to make the automated tester (or perhaps a logic analyzer may have some such functionality) out of an FPGA or uC (apply waveform, watch interrupts, breakpoints, etc). If you really want I know that Opencores.org has PIC and AVR-like 8-bit uC cores defined as VHDL, so you could implement your entire project on the FPGA and then just debug that.

許願樹丅啲祈禱 2024-07-18 06:20:17


Generally there isn't a need to model the CPU at the RTL level, since you don't really care what it does bit by bit; you generally care about what it does, e.g. register values, memories, and bus accesses.

The simplest is called a Bus Functional Model (BFM). This just generates the reads and writes that the CPU does, often driven by a text file. These are available for some CPUs and many popular buses (e.g. PCI, PCIe). These simulate super fast.

The next step up is a functional cycle-accurate model. Those simulate fast. They are often encrypted.

Last is a full RTL model. Those usually are only available if you are working closely with the CPU vendor, e.g. using their core in your ASIC. Typically these are encrypted, unless you are a huge company.

Memory models are typically cycle-accurate (e.g. Micron).

风流物 2024-07-18 06:20:17


My workmates from the hardware department use FPGA simulation software quite often to find timing-bugs and trace down strange behaviours.

Simulating one or two milliseconds can take several hours, so using the simulator for anything but very small things is not feasible.

You may want to have a look at SystemC though. http://en.wikipedia.org/wiki/SystemC
