Hand feature recognition

Published 2024-11-11 15:54:12 · 176 characters · 4 views · 0 comments


Given a photograph of a hand, I'm trying to determine the most appropriate method for locating the points on the palm where the fingers join (i.e., the locations on the palm farthest from the palm's center, essentially between the fingers).

I've been thinking about some possible ways of coding this, in particular active shape modelling. However, active shape modelling seems like overkill, since all I need is to locate those key points, not to track their movement. I was wondering whether anybody familiar with feature identification could suggest a more appropriate technique. Thanks.
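As an aside, the "center of the palm" used as a reference point in the question can itself be estimated from a binary hand mask as the peak of a Euclidean distance transform (the pixel farthest from any background pixel). A minimal sketch using scipy — the function name and the mask convention are illustrative assumptions, not part of the original post:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def palm_center(mask):
    """Estimate the palm center of a binary hand mask as the foreground
    pixel farthest from the background (the distance-transform peak)."""
    dist = distance_transform_edt(mask)
    return np.unravel_index(np.argmax(dist), dist.shape)  # (row, col)
```

For a roughly convex palm region this peak sits well inside the palm and is robust to the fingers, which are thin and contribute only small distances.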


Comments (1)

染墨丶若流云 2024-11-18 15:54:12


Here is some sample code in Python, using pymorph and mahotas. It should be fairly trivial to recreate with OpenCV. If possible I would choose a different background; something farther from skin tone would simplify the initial thresholding.

import numpy
import pymorph as m
import mahotas

def hsv_from_rgb(image):
    image = image/255.0 
    r, g, b = image[:,:,0], image[:,:,1], image[:,:,2]
    m, M = numpy.min(image[:,:,:3], 2), numpy.max(image[:,:,:3], 2)
    d = M - m

    # Chroma and Value
    c = d
    v = M

    # Hue
    h = numpy.select([c == 0, r == M, g == M, b == M], [0, ((g - b) / c) % 6, (2 + ((b - r) / c)), (4 + ((r - g) / c))], default=0) * 60

    # Saturation
    s = numpy.select([c == 0, c != 0], [0, c/v])

    return h, s, v

image = mahotas.imread('hand.jpg')

#downsample for speed
image = image[::10, ::10, :]

h, s, v = hsv_from_rgb(image)

# binary image from hue threshold
b1 = h < 35

# close small holes
b2 = m.closerec(b1, m.sedisk(5))

# remove small speckle
b3 = m.openrec(b2, m.sedisk(5))

# locate space between fingers
b4 = m.closeth(b3, m.sedisk(10))

# remove speckle, artifacts from image frame
b5 = m.edgeoff(m.open(b4))

# find intersection of hand outline with 'web' between fingers
b6 = m.gradm(b3)*b5

# reduce intersection curves to single point (assuming roughly symmetric, this is near the center)
b7 = m.thin(m.dilate(b6), m.endpoints('homotopic'))

# overlay marker points on binary image
out = m.overlay(b3, m.dilate(b7, m.sedisk(3)))

mahotas.imsave('output.jpg', out)

(output image: the b7 marker points, dilated, overlaid on the binary hand mask)
