Continued - License Plate Detection

Posted 2024-10-12 18:42:59


Continuing from this thread:

What are good algorithms for vehicle license plate detection?

I've developed my image manipulation techniques to emphasise the license plate as much as possible, and overall I'm happy with the results. Here are two samples:

[sample image 1]

[sample image 2]

Now comes the most difficult part: actually detecting the license plate. I know there are a few edge detection methods, but my maths is quite poor, so I'm unable to translate some of the complex formulas into code.

My idea so far is to loop through every pixel in the image (a for loop based on the image width and height) and compare each pixel against a list of colours, checking whether the colours keep alternating between the white of the license plate and the black of the text. Where they do, those pixels are copied into a new bitmap in memory, and once the pattern stops being detected an OCR scan is performed on that bitmap.
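Roughly, the scan I have in mind would look something like this sketch in Python with OpenCV; the black/white thresholds and the minimum number of transitions are just guesses I would still have to tune, and the file name is a placeholder:

```python
import cv2
import numpy as np

def find_alternating_rows(img_gray, white_thresh=180, black_thresh=80, min_transitions=8):
    """Return the row indices whose pixels alternate between 'plate white' and
    'text black' often enough to look like plate text. All thresholds are guesses."""
    candidate_rows = []
    for y in range(img_gray.shape[0]):
        row = img_gray[y]
        # Label each pixel: 1 = white-ish, 0 = black-ish, -1 = neither.
        labels = np.full(row.shape, -1, dtype=np.int8)
        labels[row >= white_thresh] = 1
        labels[row <= black_thresh] = 0
        # Count black<->white transitions along the row, ignoring 'neither' pixels.
        useful = labels[labels >= 0]
        transitions = np.count_nonzero(np.diff(useful))
        if transitions >= min_transitions:
            candidate_rows.append(y)
    return candidate_rows

img = cv2.imread("plate_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical pre-processed sample
print(find_alternating_rows(img))
```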

I'd appreciate some input on this, as it might be a flawed idea, or too slow or too intensive.

Thanks


Comments (3)

调妓 2024-10-19 18:42:59


Your method of "see if the colors keep differentiating between the license plate white, and the black of the text" is basically searching for areas where the pixel intensity changes from black to white and vice-versa many times. Edge detection can accomplish essentially the same thing. However, implementing your own methods is still a good idea because you will learn a lot in the process. Heck, why not do both and compare the output of your method with that of some ready-made edge detection algorithm?
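For the ready-made side of that comparison, OpenCV's Canny detector is essentially a one-liner. A minimal sketch, with the two hysteresis thresholds set to common starting values rather than anything tuned for your images:

```python
import cv2

img = cv2.imread("plate_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical sample
edges = cv2.Canny(img, 100, 200)  # hysteresis thresholds: common starting values
cv2.imwrite("plate_edges.png", edges)
```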

At some point you will want to have a binary image, say with black pixels corresponding to the "not-a-character" label, and white pixels corresponding to the "is-a-character" label. Perhaps the simplest way to do that is to use a thresholding function. But that will only work well if the characters have already been emphasized in some way.

As someone mentioned in your other thread, you can do that using the black hat operator, which results in something like this:

[image: result after the black hat operation]

If you threshold the image above with, say, Otsu's method (which automatically determines a global threshold level), you get this:

[image: result after Otsu thresholding]
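A sketch of those two steps in OpenCV, assuming a grayscale input image; the structuring-element size is an assumption and would need tuning to the character size in your images:

```python
import cv2

img = cv2.imread("plate_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical sample

# Black hat (closing minus original) highlights dark characters on a bright plate.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))  # size is a guess
blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)

# Otsu's method picks the global threshold level automatically.
_, binary = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("plate_binary.png", binary)
```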

There are several ways to clean that image. For instance, you can find the connected components and throw away those that are too small, too big, too wide or too tall to be a character:

[image: result after filtering the connected components]

Since the characters in your image are relatively large and fully connected, this method works well.
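With OpenCV that filtering step could be done via connectedComponentsWithStats; the area and aspect-ratio limits below are placeholder values you would tune to your images:

```python
import cv2
import numpy as np

binary = cv2.imread("plate_binary.png", cv2.IMREAD_GRAYSCALE)  # thresholded image from the previous step

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
cleaned = np.zeros_like(binary)

for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    # Placeholder limits on how small/big/wide/tall a character may be.
    if 50 < area < 5000 and 0.2 < w / h < 1.2:
        cleaned[labels == i] = 255

cv2.imwrite("plate_cleaned.png", cleaned)
```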

Next, you could filter the remaining components based on the properties of their neighbours until you have the desired number of components (= number of characters). If you then want to recognize each character, you could calculate features for it and feed them to a classifier, which is usually built with supervised learning.
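The classifier can be almost anything. As a sketch, here is a k-nearest-neighbour classifier from OpenCV's ml module that uses the raw resized crop as a (very crude) feature vector; the 16x16 size and k=3 are arbitrary choices:

```python
import cv2
import numpy as np

def char_features(char_img):
    """Crude feature vector: the character crop resized to 16x16 and flattened."""
    return cv2.resize(char_img, (16, 16)).astype(np.float32).reshape(-1)

def train_character_classifier(char_crops, labels):
    """char_crops: list of grayscale character images; labels: matching character codes."""
    knn = cv2.ml.KNearest_create()
    samples = np.stack([char_features(c) for c in char_crops])
    knn.train(samples, cv2.ml.ROW_SAMPLE, np.array(labels, dtype=np.float32))
    return knn

def classify_character(knn, char_crop, k=3):
    _, result, _, _ = knn.findNearest(char_features(char_crop)[np.newaxis, :], k)
    return int(result[0][0])
```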

All the steps above are just one way to do it, of course.

By the way, I generated the images above using OpenCV + Python, which is a great combination for computer vision.

樱花坊 2024-10-19 18:42:59


Colour, as good as it looks, will present quite a few challenges with shading and lighting conditions. It really depends how robust you want to make it, but real-world cases have to deal with such issues.

I have done research on road footage (see my profile page and look here for a sample) and have found that real-world road footage is extremely noisy in terms of lighting conditions: the colour of a yellow rear number plate can appear as anything from brown to white.

Most algorithms use line detection and try to find a box with an aspect ratio within an acceptable range.

I suggest you do a literature review on the subject, but this was achieved back in 1993 (if I remember correctly), so there will be thousands of articles.

This is quite a scientific domain, so an algorithm alone will not solve it and you will need numerous pre-/post-processing steps.

In brief, my suggestion is to use the Hough transform to find lines and then look for rectangles that could form an acceptable aspect ratio.
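A rough sketch of that idea in Python with OpenCV: HoughLinesP finds the line segments, and, as a simpler stand-in for assembling those lines into rectangles, contour bounding boxes are filtered by a plate-like aspect ratio. All thresholds and the aspect-ratio window are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("road_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical road frame
edges = cv2.Canny(img, 100, 200)

# Hough transform: straight line segments that could belong to the plate's border.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)

# Stand-in for "assemble lines into rectangles": contour bounding boxes filtered
# by a plate-like aspect ratio (the limits below are guesses).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
boxes = [cv2.boundingRect(c) for c in contours]
plates = [(x, y, w, h) for (x, y, w, h) in boxes if 2.0 < w / h < 6.0 and w > 60]
print(0 if lines is None else len(lines), "line segments;", len(plates), "plate-like boxes")
```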

Harris feature detection could provide important edges, but if the car is light-coloured this will not work.
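For reference, Harris responses are one call in OpenCV; the block size, Sobel aperture and the 1% cut-off below are just typical defaults:

```python
import cv2
import numpy as np

gray = cv2.imread("road_frame.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
# Keep only strong responses; 1% of the maximum is an arbitrary cut-off.
strong = corners > 0.01 * corners.max()
print(int(strong.sum()), "strong corner pixels")
```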

余罪 2024-10-19 18:42:59


If you have a lot of samples, you could try the face detection method developed by Paul Viola and Michael Jones. It works well for face detection, and it may do fine for license plate detection too (especially if combined with some other method).
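If you go that route, OpenCV exposes the Viola-Jones detector as CascadeClassifier. A minimal sketch, where the cascade XML file is a placeholder for one you would train on plate samples:

```python
import cv2

# "plate_cascade.xml" is a placeholder for a cascade trained on plate samples,
# e.g. with OpenCV's opencv_traincascade tool and your positive/negative images.
cascade = cv2.CascadeClassifier("plate_cascade.xml")
img = cv2.imread("road_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
plates = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in plates:
    print("candidate plate at", x, y, w, h)
```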
