With respect to maintainability, is a mapping-table design better than a large switch statement?
I am translating text files from one set of definitions to another, and I solved the problem by writing a small parser. Once I've identified a symbol I end up at a case statement that decides which translation routine to call, depending on which input option the user selected (these are codes that mean different things on different machines).
I am essentially taking multiple input formats and converting them to a single output format, over about 400 unique symbols.
The problem is that as this project has grown from a couple of simple translations, each in its own header file, into a dozen or more input formats, it is getting cumbersome to maintain. Each of these header files contains a monster switch statement that produces the relevant output. It all works, but it really seems clumsy.
Will I resolve the maintainability issue by creating mapping tables (i.e. a 2D array containing input and output symbols) for each input machine, and using a common translation routine that takes the tables as input? Is there a better design I should consider?
A hash-table type structure would definitely be easier to maintain, but there is at least one tradeoff: your giant switch statement would almost certainly be faster, because any decent compiler will optimise it into a jump table. Still, depending on the implementation, a hash table shouldn't be noticeably slower unless you're doing 50 billion lookups or something. That said, a hash table could be optimised to be just as fast as a switch statement.
Bottom line: if you don't need to squeeze out every ounce of speed possible, I'd go with a hash table. If speed matters, profile first.
You might want to check out gperf, which generates compile-time perfect hash tables.
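For reference, a gperf input file for this kind of symbol mapping might look roughly like the following (the symbols and the `output` field are invented for illustration; consult the gperf manual for the exact options):

```
%{
/* symbols.gperf -- hypothetical gperf input for one input machine */
#include <string.h>
%}
struct symbol { const char *name; const char *output; };
%%
MOV, "move"
JMP, "jump"
HLT, "halt"
```

Running something like `gperf -t -N lookup_symbol symbols.gperf > symbols.c` would then generate a `lookup_symbol(str, len)` function that returns the matching `struct symbol *` (or NULL) in constant time, with no hand-written switch at all.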
You can use macros to reduce code duplication. For example: