General considerations for NUI/touch interfaces

Published 2024-12-05 08:22:49

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.

The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. Each of these should have (a sketch follows the list):

  • X position
  • Y position
  • Height
  • Width
  • MIDI output channel
  • MIDI data scaler (converts x/y coords to MIDI values)
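
As a rough Python sketch (the names here are mine, not from any particular framework), each control object might look like:

```python
# Hypothetical names throughout; this is just the property list above as code.
from dataclasses import dataclass

@dataclass
class SynthControl:
    x: float           # left edge, in screen coordinates
    y: float           # top edge, in screen coordinates
    width: float
    height: float
    midi_channel: int  # MIDI output channel, 0-15
    midi_cc: int       # continuous controller number, 0-127

    def contains(self, px: float, py: float) -> bool:
        """Hit test: does the point fall inside this control's bounds?"""
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

    def scale_to_midi(self, px: float, py: float) -> int:
        """MIDI data scaler: map the vertical position within the control
        to 0-127 (top of the control = 127, bottom = 0)."""
        t = (py - self.y) / self.height
        return max(0, min(127, round((1.0 - t) * 127)))
```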

Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
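
To make the XML idea concrete, here's a hedged sketch of a loader. The format is invented for illustration; the normalized 0-1 coordinates give the Flash-style scaling by screen size, and it reuses the SynthControl sketch above:

```python
# Hypothetical layout format; element and attribute names are made up.
import xml.etree.ElementTree as ET

LAYOUT = """
<interface>
  <control name="cutoff"    x="0.10" y="0.20" w="0.15" h="0.60" channel="0" cc="74"/>
  <control name="resonance" x="0.30" y="0.20" w="0.15" h="0.60" channel="0" cc="71"/>
</interface>
"""

def load_controls(xml_text: str, screen_w: int, screen_h: int) -> list:
    """Parse the XML description into SynthControl objects sized for
    the actual screen (coordinates in the file are normalized 0-1)."""
    controls = []
    for el in ET.fromstring(xml_text).iter("control"):
        controls.append(SynthControl(
            x=float(el.get("x")) * screen_w,
            y=float(el.get("y")) * screen_h,
            width=float(el.get("w")) * screen_w,
            height=float(el.get("h")) * screen_h,
            midi_channel=int(el.get("channel")),
            midi_cc=int(el.get("cc")),
        ))
    return controls

controls = load_controls(LAYOUT, 1920, 1080)
```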

Other considerations:

Given (for now) two sets of X/Y coords as input (left and right hands), what is my best option for using them? My first instinct is to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and then becomes inactive if they stay outside some other, slightly larger bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I could do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
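
To pin down what I mean, here's a rough Python sketch of that focus test. The release boundary is a margin larger than the activation boundary (the usual hysteresis arrangement), and the margin and timeout values are guesses to be tuned; it reuses SynthControl.contains from above:

```python
import time

class FocusTracker:
    def __init__(self, controls, margin=40.0, release_secs=0.5):
        self.controls = controls
        self.margin = margin              # extra pixels around the active control
        self.release_secs = release_secs  # grace period before dropping focus
        self.active = None
        self.outside_since = None

    def _in_expanded_bounds(self, c, px, py):
        return (c.x - self.margin <= px <= c.x + c.width + self.margin
                and c.y - self.margin <= py <= c.y + c.height + self.margin)

    def update(self, px, py):
        """Feed one hand position per frame; returns the focused control."""
        now = time.monotonic()
        if self.active is not None:
            if self._in_expanded_bounds(self.active, px, py):
                self.outside_since = None
                return self.active
            if self.outside_since is None:
                self.outside_since = now
            if now - self.outside_since < self.release_secs:
                return self.active        # still within the grace period
            self.active = None            # hand stayed out long enough
        for c in self.controls:
            if c.contains(px, py):        # tighter bounds to gain focus
                self.active = c
                self.outside_since = None
                break
        return self.active
```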

So, to summarize, the problem is to:

  • represent the interface (necessary because the default interface always expects mouse input)
  • select a control
  • manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus (one idea for the rotary case is sketched after this list).
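
For the two-handed rotary case, one untested idea is to derive the value from the angle of the line between the hands; this is purely illustrative, and the sweep range is a guess:

```python
import math

def hands_to_rotary_cc(lx, ly, rx, ry, sweep_degrees=270.0):
    """Map the left-to-right-hand angle to a 0-127 MIDI CC value.
    0 degrees = hands level; positive = right hand raised."""
    angle = math.degrees(math.atan2(ly - ry, rx - lx))  # screen y grows downward
    t = (angle + sweep_degrees / 2) / sweep_degrees     # normalize to 0..1
    return max(0, min(127, round(t * 127)))
```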

Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.

夜司空 2024-12-12 08:22:49

Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...

But first, I need to get this pet peeve out of the way: You say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect though since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and users' ability to precisely control finicky & lagging virtual cursors.
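
To make the physical-size point concrete: with the display DPI in hand, the 3/4-inch target is a one-line conversion (the DPI values below are just examples):

```python
def touch_target_px(dpi: float, inches: float = 0.75) -> int:
    """Pixel size of a touch target with a fixed physical size."""
    return round(dpi * inches)

print(touch_target_px(96))    # ~72 px on a standard desktop display
print(touch_target_px(326))   # ~245 px on a high-DPI phone screen
```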

If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button (a dwell-timer sketch follows this list)
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and another direction to select from the list
4) Voice commands. User just speaks a command.
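
To make paradigm #2 concrete, here's a minimal dwell-timer sketch in Python; the radius and dwell time are illustrative guesses, not values from any shipped title:

```python
import math, time

class DwellSelector:
    def __init__(self, radius_px=30.0, dwell_secs=1.5):
        self.radius_px = radius_px    # how far the cursor may wander
        self.dwell_secs = dwell_secs  # how long it must hold still
        self.anchor = None            # (x, y) where the cursor settled
        self.since = None

    def update(self, x, y):
        """Feed cursor positions each frame; returns True on selection."""
        now = time.monotonic()
        if (self.anchor is None
                or math.hypot(x - self.anchor[0], y - self.anchor[1]) > self.radius_px):
            self.anchor, self.since = (x, y), now  # cursor moved: restart dwell
            return False
        if now - self.since >= self.dwell_secs:
            self.since = now                       # re-arm so it fires once per dwell
            return True
        return False
```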

There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".

It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.

If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
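
Here's a rough Python sketch of those two hand roles; the swipe threshold and the skeleton-space range for the right hand are placeholders you'd tune against the sensor, not Kinect SDK values:

```python
def detect_swipe(x_history, frame_dt, min_speed=1.5):
    """Return 'left'/'right' if the recent left-hand motion is a fast,
    mostly-horizontal sweep; None otherwise. x_history holds the last
    few x positions in meters; frame_dt is seconds per frame."""
    if len(x_history) < 2:
        return None
    speed = (x_history[-1] - x_history[0]) / (frame_dt * (len(x_history) - 1))
    if speed > min_speed:
        return "right"
    if speed < -min_speed:
        return "left"
    return None

def hand_plane_to_cc(hand_y, y_min=0.8, y_max=1.6):
    """Map right-hand height (meters, skeleton space) within the tracking
    plane to a 0-127 MIDI CC value."""
    t = (hand_y - y_min) / (y_max - y_min)
    return max(0, min(127, round(t * 127)))
```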

PS... I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/
