Image processing: hand feature recognition

Published 2024-12-08 17:27:13

I'm working on a problem (in C++/OpenCV) in which 4 users need to be distinguished from each other using identity information extracted from skin color and features of the upper hand. However, the skin color method (in YCrCb) has very low reliability because there is not much difference between the skin tones. Therefore I'm trying to extract more features from the hands, such as darker spots. To do this, I calculated the Laplacian of the images. Results:

http://imageshack.us/photo/my-images/818/afb1.jpg/
http://imageshack.us/photo/my-images/31/afb2i.jpg/
http://imageshack.us/photo/my-images/638/afb3.jpg/
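
For reference, here is a minimal sketch of the preprocessing described above (YCrCb skin segmentation followed by a Laplacian), using the OpenCV C++ API. The threshold ranges, kernel sizes, and the file names are illustrative assumptions, not values taken from the images above:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("hand.jpg");            // input hand image (placeholder path)
    if (bgr.empty()) return 1;

    // Skin segmentation in YCrCb: threshold the Cr/Cb channels.
    // The ranges below are common illustrative values, not tuned for this data.
    cv::Mat ycrcb, skinMask;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);
    cv::inRange(ycrcb, cv::Scalar(0, 133, 77), cv::Scalar(255, 173, 127), skinMask);

    // Laplacian of the smoothed grayscale image to emphasize darker spots.
    cv::Mat gray, lap, lapAbs;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(3, 3), 0);
    cv::Laplacian(gray, lap, CV_16S, 3);
    cv::convertScaleAbs(lap, lapAbs);

    // Restrict the Laplacian response to the skin region.
    cv::Mat lapSkin;
    lapAbs.copyTo(lapSkin, skinMask);

    cv::imwrite("laplacian_skin.png", lapSkin);
    return 0;
}
```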

The first two images are from the same hand/person. The third image is a hand from another person. As you can see, a clear bright spot is visible in the first two images, which represents a darker spot on the hand. My idea was to sample the hand contour in small squares and then search for those squares in the other images. Afterwards, we can evaluate which image has the most and strongest matches for a given image.
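
A sketch of that patch-sampling idea, assuming a binary hand mask is already available (e.g. from the skin segmentation above): square patches are taken along the largest contour, and each is scored against a second image with normalized cross-correlation via cv::matchTemplate. The patch size, stride, and acceptance threshold are arbitrary placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sample square patches along the largest contour of a binary hand mask.
// Patch size and sampling stride are illustrative assumptions.
std::vector<cv::Mat> samplePatches(const cv::Mat& gray, const cv::Mat& handMask,
                                   int patch = 24, int stride = 8) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(handMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    std::vector<cv::Mat> patches;
    if (contours.empty()) return patches;

    // Pick the largest contour, assumed to be the hand.
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;

    for (size_t i = 0; i < contours[best].size(); i += stride) {
        cv::Point p = contours[best][i];
        cv::Rect r(p.x - patch / 2, p.y - patch / 2, patch, patch);
        if ((r & cv::Rect(0, 0, gray.cols, gray.rows)) == r)  // keep patches fully inside the image
            patches.push_back(gray(r).clone());
    }
    return patches;
}

// Score how well the patches from one hand are found in another image:
// count patches whose best normalized cross-correlation exceeds a threshold.
double matchScore(const std::vector<cv::Mat>& patches, const cv::Mat& otherGray,
                  double threshold = 0.8) {
    int hits = 0;
    for (const cv::Mat& patch : patches) {
        cv::Mat result;
        cv::matchTemplate(otherGray, patch, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::minMaxLoc(result, nullptr, &maxVal);
        if (maxVal > threshold) ++hits;
    }
    return patches.empty() ? 0.0 : static_cast<double>(hits) / patches.size();
}
```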

However, I cannot find an algorithm to find matches between a sample image and another image. I tried the cvMatchTemplate() operation (http://dasl.mem.drexel.edu/~noahKuntz/openCVTut6.html#Step%202) and the meanShift algorithm, but the results of both techniques were really bad.

Can someone give me some tips?

Comments (1)

还给你自由 2024-12-15 17:27:13

This is a tough problem since the hand is such a flexible object. You might have some luck if you solve the hand-pose-estimation problem first.
Here is a good paper to help you get a handle on the research space:

Vision-based hand pose estimation: a review

Video example with OpenCV implementation:

http://www.youtube.com/watch?v=uETHJQhK144

Once you have an estimate of the hand pose, you have a basis for isolating and comparing the same region of each hand (just the area between the knuckles and the wrist, for example). Then you could start applying generic image matching techniques. Applying the Eigenfaces approach ("Eigenhands" in your case) might be your best bet. Eigenfaces is taught in introductory computer vision courses, and plenty of information is available online.
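
A minimal sketch of that "Eigenhands" idea using cv::PCA from the OpenCV core module, assuming the hand regions have already been pose-normalized and cropped to a fixed size; the retained-variance fraction and the nearest-neighbor classification step are assumptions, not part of the answer above:

```cpp
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <vector>

// Flatten fixed-size, pose-normalized hand crops into rows of a data matrix.
static cv::Mat toRowMatrix(const std::vector<cv::Mat>& crops) {
    cv::Mat data(static_cast<int>(crops.size()),
                 static_cast<int>(crops[0].total()), CV_32F);
    for (size_t i = 0; i < crops.size(); ++i) {
        cv::Mat row;
        crops[i].convertTo(row, CV_32F);
        row.reshape(1, 1).copyTo(data.row(static_cast<int>(i)));
    }
    return data;
}

int main() {
    // trainCrops/labels would come from the pose-normalized training hands (placeholders here).
    std::vector<cv::Mat> trainCrops;
    std::vector<int> labels;
    // ... fill trainCrops (e.g. 64x64 grayscale) and labels (user 0..3) ...
    if (trainCrops.empty()) return 0;

    cv::Mat data = toRowMatrix(trainCrops);

    // PCA: keep enough components to explain ~95% of the variance (an assumption).
    cv::PCA pca(data, cv::noArray(), cv::PCA::DATA_AS_ROW, 0.95);
    cv::Mat projected = pca.project(data);   // each row: "Eigenhand" coefficients of a training crop

    // Classify a query crop by nearest neighbor in the PCA subspace.
    cv::Mat query = trainCrops[0];            // placeholder query
    cv::Mat q = pca.project(toRowMatrix({query}));
    int bestLabel = -1;
    double bestDist = DBL_MAX;
    for (int i = 0; i < projected.rows; ++i) {
        double d = cv::norm(projected.row(i), q, cv::NORM_L2);
        if (d < bestDist) { bestDist = d; bestLabel = labels[i]; }
    }
    // bestLabel is the predicted user for the query hand.
    return bestLabel >= 0 ? 0 : 1;
}
```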
