Eye tracking: finding the pupil (x, y)

Posted on 2024-09-28 02:04:11

I am looking for some suggestions on how to approach the following computer vision problem.
Below are 4 samples of an eye tracking dataset that I am working with. I would like to write code that takes one such image and calculates the (x, y) position of the center of the pupil. I am currently using MATLAB, but I am open to using other software too.

Can someone recommend an approach I could use for this task? Here are some things I have already tried that didn't work too well.

  • I tried the circular Hough transform, but that requires me to guess the radius of the pupil, which is a bit problematic. Also, due to distortions, the pupil is not always exactly a circle, which may make this approach harder still.
  • I tried thresholding the image based on pixel brightness and using MATLAB's regionprops function to look for a region of roughly (say) 200 pixels in area with very low eccentricity (i.e. as circular as possible). However, this is very sensitive to the threshold value, and some images of the eye are brighter than others depending on the lighting conditions. (Note that the 4 samples below are already mean-normalized, and still one of the images is brighter than the others overall, probably because of a very dark random pixel somewhere.) A rough sketch of this approach is shown after this list.
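For reference, here is a minimal Python sketch of the thresholding + region-properties idea from the second bullet, using scikit-image's regionprops as a rough equivalent of MATLAB's. The file name, threshold, area range and eccentricity cut-off are illustrative and would need tuning on the real dataset.

import numpy as np
from skimage import io, measure

# "eye_sample.png" is an illustrative file name; the threshold, area range and
# eccentricity cut-off below are guesses, not values from the original question.
img = io.imread("eye_sample.png", as_gray=True).astype(float)
img = img / img.max()                      # normalize to [0, 1]

mask = img < 0.2                           # the pupil is dark: keep low-intensity pixels
labels = measure.label(mask)

best = None
for region in measure.regionprops(labels):
    # keep blobs of roughly the expected pupil size that are close to circular
    if 100 < region.area < 400 and region.eccentricity < 0.5:
        if best is None or region.area > best.area:
            best = region

if best is not None:
    cy, cx = best.centroid                 # regionprops centroids are (row, col)
    print(f"Pupil center estimate: ({cx:.1f}, {cy:.1f})")
else:
    print("No pupil-like region found")    # e.g. the last sample, where the pupil is absent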

Any comments/suggestions would be appreciated!

EDIT: thanks for the comment, Stargazer. The algorithm should ideally be able to determine that the pupil is not in the image, as is the case for the last sample. It's not a big deal if I lose track of it for a while. It's much worse if it gives me a wrong answer, though.

[image: the 4 sample eye images from the dataset]

Comments (4)

一场春暖 2024-10-05 02:04:11

I'm not sure whether this can help you, because you are working from a dataset and I don't know how much flexibility you have to change the capture device. Just in case, here it is.

Morimoto et al. use a nice camera trick. They built a camera with two sets of infrared LEDs: the first set is placed near the camera lens, the second far from it. Driven at different frequencies, the two LED sets are switched on at different moments.

The retina reflects the light from the set near the camera lens (the same effect as the red-eye problem in photography), producing a bright pupil. The other LED set produces a dark pupil. A simple difference between the two images then gives you a near-perfect pupil. Also take a look at the way Morimoto et al. exploit the corneal glint (useful for approximating gaze direction).
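A minimal sketch of this bright-pupil / dark-pupil difference idea, assuming two already-aligned grayscale frames captured under the two LED sets (the file names and the threshold value are illustrative):

import cv2
import numpy as np

# Two aligned grayscale frames of the same eye: one lit by the on-axis LEDs
# (bright pupil) and one by the off-axis LEDs (dark pupil). File names are illustrative.
bright = cv2.imread("bright_pupil.png", cv2.IMREAD_GRAYSCALE)
dark = cv2.imread("dark_pupil.png", cv2.IMREAD_GRAYSCALE)

# The pupil is the region that changes most between the two frames.
diff = cv2.absdiff(bright, dark)

# Threshold the difference image; the value 40 is a guess that would need tuning.
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Take the centroid of the largest connected component as the pupil center.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # label 0 is the background
    cx, cy = centroids[largest]
    print(f"Pupil center estimate: ({cx:.1f}, {cy:.1f})")
else:
    print("No pupil found in the difference image")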

笑咖 2024-10-05 02:04:11

Use OpenCV with Python. It will be very easy for beginners to work with OpenCV.

Procedure:

* If you are using a normal webcam

1. First grab the frames with the VideoCapture function.

2. Convert each frame to a grayscale image.

3. Find Canny edges using the cv2.Canny() function.

4. Apply the HoughCircles function. It will find the circles in the image as well as their centers.

5. Use the resulting parameters of HoughCircles to draw the circle around the pupil. That's it. (A minimal sketch of this pipeline is shown after the list.)
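A minimal Python/OpenCV sketch of the pipeline above. The camera index and all HoughCircles parameters are illustrative; note that with HOUGH_GRADIENT, HoughCircles runs a Canny edge detector internally, so step 3 is effectively handled by param1.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                            # 1. grab frames from the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # 2. convert to grayscale
    gray = cv2.medianBlur(gray, 5)                   # reduce noise before circle detection
    # 3-4. HOUGH_GRADIENT applies Canny internally; param1 is its upper threshold
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)   # 5. draw the circle
            cv2.circle(frame, (int(x), int(y)), 2, (0, 0, 255), 3)        # and its center
    cv2.imshow("pupil", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()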

黯然#的苍凉 2024-10-05 02:04:11

OpenCV with Python, C, C++, Java and others would be a good tool for doing that. There is a Python tutorial here: http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html, but there are definitely other tutorials out there for the other supported languages. OpenCV ships a number of Haar cascades right out of the box, including one for eye detection. If you actually want to implement a solution using the Hough circle transform, OpenCV has the appropriate function for that too.
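For example, a minimal sketch of eye detection with one of OpenCV's bundled Haar cascades. The cv2.data.haarcascades path assumes a standard opencv-python install, and the input file name is illustrative.

import cv2

# Load the bundled eye-detection cascade shipped with OpenCV.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("eye_frame.png")                 # illustrative file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect eye regions; scaleFactor and minNeighbors are typical starting values.
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite("eyes_detected.png", img)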

愿与i 2024-10-05 02:04:11
import java.awt.Robot;    % add package or class to the current import list
import java.awt.event.*;
robot = Robot();

obj = videoinput('winvideo', 2);          % set the device ID and supported format
set(obj, 'FramesPerTrigger', Inf);        % trigger indefinitely
set(obj, 'ReturnedColorspace', 'rgb');    % video in RGB format
obj.FrameGrabInterval = 5;                % the object acquires every 5th frame from the video stream
start(obj);                               % start the video
time = 0;
NumberOfFrames = 0;

while (true)
    data = getsnapshot(obj);
    image(data);
    filas = size(data, 1);
    columnas = size(data, 2);
    % Center
    centro_fila = round(filas/2);
    centro_columna = round(columnas/2);
    figure(1);
    if size(data, 3) == 3
        data = rgb2gray(data);
        % Extract edges.
        BW = edge(data, 'canny');
        [H, T, R] = hough(BW, 'RhoResolution', 0.5, 'Theta', -90:0.5:89.5);
    end
    subplot(212)
    piel = ~im2bw(data, 0.19);
    piel = bwmorph(piel, 'close');
    piel = bwmorph(piel, 'open');
    piel = bwareaopen(piel, 275);
    piel = imfill(piel, 'holes');
    imagesc(piel);
    % Tagged objects in BW image
    L = bwlabel(piel);
    % Get areas and tracking rectangle
    out_a = regionprops(L);
    % Count the number of objects
    N = size(out_a, 1);
    if N < 1 || isempty(out_a)   % skip the frame if there is no object in the image
        solo_cara = [];
        continue
    end
    % Select the largest area
    areas = [out_a.Area];
    [area_max, pam] = max(areas);
    subplot(211)
    imagesc(data);
    colormap gray
    hold on
    rectangle('Position', out_a(pam).BoundingBox, 'EdgeColor', [1 0 0], ...
              'Curvature', [1, 1], 'LineWidth', 2)
    centro = round(out_a(pam).Centroid);
    X = centro(1);
    Y = centro(2);
    robot.mouseMove(X, Y);
    text(X+10, Y, ['(', num2str(X), ',', num2str(Y), ')'], 'Color', [1 1 1])
    if X < centro_columna && Y < centro_fila
        title('Top left')
    elseif X > centro_columna && Y < centro_fila
        title('Top right')
    elseif X < centro_columna && Y > centro_fila
        title('Bottom left')
    else
        title('Bottom right')
    end   % closes the quadrant check
end       % closes the while loop