TensorFlow Lite object detection, Android Studio, SSD MobileNet V2: same structure, different tflite file, but almost 0 detections
I want to make an object detection application based on this GitHub repo: https://github.com/bendahouwael/Vehicle-Detection-App-Android.

That GitHub code uses a tflite model based on SSD MobileNet V1, so I made my own custom model based on SSD MobileNet V2. I followed this link https://colab.research.google.com/drive/1qXn9q6m5ug7EWJsJov6mHaotHhCUY-wG?usp=sharing to make my own TFLITE model.

Using https://netron.app/ I checked that both model structures are almost the same. Please see the pictures below.

The first picture shows the SSD MobileNet V1 structure. The second picture shows my own custom model based on SSD MobileNet V2.

I think both models' structures are the same, so I just pasted my own model (with the label txt file) into the app code, in the assets folder.

The application shows its real-time camera image well, but it does not detect the objects I trained it to detect. I know the SSD MobileNet V1 model's input type is uint8, while my own model (based on SSD MobileNet V2) is float32. But I guess this is not the problem, because the code has a setting for whether the model is quantized or not.

So if anyone has any ideas, please tell me why my application works so badly.

ps1) I forgot to mention debugging: the app does not show any error messages, which makes this much harder to work on.
1 Answer
If you look closely at the INPUT part, for MobileNet V1 you have:

type: uint8[1, 300, 300, 1]

and for MobileNet V2 you have:

type: float[1, 300, 300, 1]

This means the first model is quantized (more info: here): it uses integer values for the weights and biases, which is done for inference speed.

Now if you go to your TFLite object detection class (it may be named differently in your project), you will usually have a method called recognizeImage(); this is the part where you create and fill the ByteBuffer, and it branches on whether the model is quantized.

So in the first case (MobileNet V1) set isModelQuantized = true, and for your MobileNet V2 model set isModelQuantized = false.
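To illustrate what that ByteBuffer-filling step in recognizeImage() typically looks like, here is a minimal, self-contained sketch (plain Java, no Android dependencies). The names fillBuffer, INPUT_SIZE, IMAGE_MEAN, and IMAGE_STD are illustrative assumptions, not the repo's exact code; the point is the uint8-vs-float32 branch controlled by isModelQuantized:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ImageBuffer {
    static final int INPUT_SIZE = 300;     // model input width/height
    static final float IMAGE_MEAN = 128.0f; // common normalization constants
    static final float IMAGE_STD = 128.0f;  // for float SSD MobileNet models

    // Fill the input ByteBuffer from ARGB pixels. For a quantized (uint8)
    // model, raw 0-255 bytes go in; for a float32 model, each channel is
    // normalized to roughly [-1, 1].
    static ByteBuffer fillBuffer(int[] pixels, boolean isModelQuantized) {
        int bytesPerChannel = isModelQuantized ? 1 : 4;
        ByteBuffer imgData = ByteBuffer.allocateDirect(
                INPUT_SIZE * INPUT_SIZE * 3 * bytesPerChannel);
        imgData.order(ByteOrder.nativeOrder());
        for (int pixel : pixels) {
            int r = (pixel >> 16) & 0xFF;
            int g = (pixel >> 8) & 0xFF;
            int b = pixel & 0xFF;
            if (isModelQuantized) {
                imgData.put((byte) r);
                imgData.put((byte) g);
                imgData.put((byte) b);
            } else {
                imgData.putFloat((r - IMAGE_MEAN) / IMAGE_STD);
                imgData.putFloat((g - IMAGE_MEAN) / IMAGE_STD);
                imgData.putFloat((b - IMAGE_MEAN) / IMAGE_STD);
            }
        }
        return imgData;
    }

    public static void main(String[] args) {
        int[] onePixel = {0xFFFFFFFF}; // one white pixel (r=g=b=255)
        ByteBuffer q = fillBuffer(onePixel, true);
        System.out.println(q.get(0) & 0xFF);  // prints 255
        ByteBuffer f = fillBuffer(onePixel, false);
        System.out.println(f.getFloat(0));    // prints 0.9921875
    }
}
```

If you feed a float32 model a buffer packed the uint8 way (or vice versa), the interpreter reads garbage input and you get near-zero detections with no error, which matches the symptom described in the question.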