Would a TF Hub model run faster after transfer learning?
I have a project where I want to detect humans in images. I have implemented the code using the pre-trained TF Hub models available at: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md. These models are already capable of recognizing humans, but they also recognize many other objects (they were trained on the COCO dataset). I am wondering: if I remove the head of these pre-trained models and train them to recognize only humans, would they run considerably faster than the out-of-the-box models? Thanks for your help.
1 Answer
Transfer learning nearly always helps. In this case, the complexity of the learned features increases as you go deeper into the network's layers.
Therefore, feel free to remove the last FC layer and replace it with your own. You can even remove the last several layers and add your own; that can give you a nearly equivalent boost by helping your model converge faster.
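A minimal sketch of the "remove the head, attach your own" idea in Keras. Note this is illustrative: it uses `MobileNetV2` as a stand-in backbone rather than the specific TF Hub detection models from the question, and it frames the task as binary "human / no human" classification.

```python
import tensorflow as tf

# Load a backbone pre-trained on ImageNet, dropping its original
# 1000-class FC head via include_top=False.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
backbone.trainable = False  # freeze pre-trained features; only the new head trains

# Attach a new head for a single "human present" output.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

With the backbone frozen, training updates only the small new head, which is why convergence is fast; you can later unfreeze the top few backbone layers for fine-tuning. Inference speed, though, is dominated by the backbone, so a smaller head alone does not change runtime much.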