Hard to understand FLOPs in this case
Given that FLOPS means floating-point operations per second, wouldn't that depend on the power of the machine rather than on the model and how many parameters it has? What am I missing here? The screenshot is from "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks". Thanks.
1 Answer
Some hardware manufacturers specify FLOPS (floating-point operations per second) as a performance metric. On the other hand, you can calculate an approximate FLOPs (floating-point operation count) value for your model; for a regular (not depthwise) convolutional layer it would be (according to this):

FLOPs = 2 * H * W * C_in * K^2 * C_out

where H and W are the height and width of the output feature map, C_in and C_out are the input and output channel counts, K is the kernel size, and the 2 accounts for the two different types of instructions involved: multiply and accumulate.
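As a quick sanity check, here is a minimal Python sketch of that estimate; the function name and the example layer shape are illustrative assumptions, not from the linked source:

```python
def conv2d_flops(h_out: int, w_out: int, c_in: int, k: int, c_out: int) -> int:
    """Approximate FLOPs for a regular (not depthwise) 2D convolution.

    h_out, w_out: spatial size of the *output* feature map
    c_in, c_out:  input / output channel counts
    k:            kernel size (assumes a square k x k kernel)
    The factor 2 counts both the multiply and the accumulate.
    """
    return 2 * h_out * w_out * c_in * k * k * c_out

# Hypothetical example: a 3x3 stride-2 stem convolution mapping a
# 224x224x3 image to a 112x112x32 feature map:
print(conv2d_flops(112, 112, 3, 3, 32))  # 21676032, i.e. ~21.7 MFLOPs
```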
You need to keep in mind that a low FLOPs count does not automatically translate into fast execution; actual throughput also depends on the hardware and how efficiently it can run those operations.