Using Touch Events - Web APIs

Today, most Web content is designed for keyboard and mouse input. However, devices with touch screens (especially portable devices) are now mainstream, and Web applications can either process touch-based input directly by using touch events, or rely on the browser's interpreted mouse events instead. A disadvantage of using mouse events is that they do not support concurrent user input, whereas touch events support multiple simultaneous inputs (possibly at different locations on the touch surface), thus enhancing user experiences.

The touch event interfaces support application-specific single and multi-touch interactions, such as a two-finger gesture. A multi-touch interaction starts when a finger (or stylus) first touches the contact surface. Other fingers may subsequently touch the surface and optionally move across it. The interaction ends when the fingers are removed from the surface. During this interaction, an application receives touch events during the start, move, and end phases, and may apply its own semantics to the touch inputs.

Interfaces

Touch events consist of three interfaces (Touch, TouchEvent and TouchList) and the following event types:

  • touchstart - fired when a touch point is placed on the touch surface.
  • touchmove - fired when a touch point is moved along the touch surface.
  • touchend - fired when a touch point is removed from the touch surface.
  • touchcancel - fired when a touch point has been disrupted in an implementation-specific manner (for example, too many touch points are created).

The Touch interface represents a single contact point on a touch-sensitive device. The contact point is typically referred to as a touch point or just a touch. A touch is usually generated by a finger or stylus on a touchscreen, pen or trackpad. A touch point's properties include a unique identifier, the touch point's target element as well as the X and Y coordinates of the touch point's position relative to the viewport, page, and screen.
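
For example, a touchstart handler might read these properties from the first touch point. This is a minimal sketch; identifier, target, and the client/page/screen coordinate pairs are the standard Touch attributes:

// Log the standard properties of the first touch point
function log_touch_properties(ev) {
  var touch = ev.touches[0];
  console.log('identifier: ' + touch.identifier);
  console.log('target: ' + touch.target.tagName);
  console.log('viewport: ' + touch.clientX + ', ' + touch.clientY);
  console.log('page: ' + touch.pageX + ', ' + touch.pageY);
  console.log('screen: ' + touch.screenX + ', ' + touch.screenY);
}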

The TouchList interface represents a list of contact points with a touch surface, one touch point per contact. Thus, if the user activated the touch surface with one finger, the list would contain one item, and if the user touched the surface with three fingers, the list length would be three.

The TouchEvent interface represents an event sent when the state of contacts with a touch-sensitive surface changes. The state changes are starting contact with a touch surface, moving a touch point while maintaining contact with the surface, releasing a touch point and canceling a touch event. This interface's attributes include the state of several modifier keys (for example the shift key) and the following touch lists:

  • touches - a list of all of the touch points currently on the screen.
  • targetTouches - a list of the touch points on the target DOM element.
  • changedTouches - a list of touch points whose contents depend on the event type (see the sketch after this list):
    • For the touchstart event, it is a list of the touch points that became active with the current event.
    • For the touchmove event, it is a list of the touch points that have changed since the last event.
    • For the touchend event, it is a list of the touch points that have been removed from the surface (that is, the set of touch points corresponding to fingers no longer touching the surface).
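
For example, the following minimal sketch can be registered as the handler for each of the four event types to log how the lengths of the three lists differ:

// Log the length of each touch list for any touch event type
function log_touch_lists(ev) {
  console.log(ev.type + ': touches=' + ev.touches.length
    + ' targetTouches=' + ev.targetTouches.length
    + ' changedTouches=' + ev.changedTouches.length);
}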

Together, these interfaces define a relatively low-level set of features, yet they support many kinds of touch-based interaction, including the familiar multi-touch gestures such as multi-finger swipe, rotation, pinch and zoom.

From interfaces to gestures

An application may consider different factors when defining the semantics of a gesture. For instance, the distance a touch point traveled from its starting location to its location when the touch ended. Another potential factor is time; for example, the time elapsed between the touch's start and the touch's end, or the time lapse between two consecutive taps intended to create a double-tap gesture. The directionality of a swipe (for example left to right, right to left, etc.) is another factor to consider.
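
The following sketch shows how those factors might be computed: the starting position and time are recorded in touchstart, and the distance, elapsed time and horizontal direction are derived in touchend. The handler and variable names are illustrative; only the Touch coordinate properties come from the API:

// Record where and when the touch started
var startX, startY, startTime;

function gesture_start(ev) {
  var touch = ev.changedTouches[0];
  startX = touch.clientX;
  startY = touch.clientY;
  startTime = Date.now();
}

// Derive distance, elapsed time and direction when the touch ends
function gesture_end(ev) {
  var touch = ev.changedTouches[0];
  var dx = touch.clientX - startX;
  var dy = touch.clientY - startY;
  var distance = Math.sqrt(dx * dx + dy * dy);
  var elapsed = Date.now() - startTime;
  var direction = dx >= 0 ? 'left-to-right' : 'right-to-left';
  // Apply application-specific semantics (swipe, tap, double-tap, ...)
  console.log(distance + 'px in ' + elapsed + 'ms, ' + direction);
}

These would be registered as the element's touchstart and touchend handlers.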

The touch list(s) an application uses depends on the semantics of the application's gestures. For example, if an application supports a single touch (tap) on one element, it would use the targetTouches list in the touchstart event handler to process the touch point in an application-specific manner. If an application supports two-finger swipe for any two touch points, it will use the changedTouches list in the touchmove event handler to determine if two touch points had moved and then implement the semantics of that gesture in an application-specific manner.
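
As a sketch of the two-finger case, the touchmove handler below checks whether two touch points have moved and hands them to handle_two_finger_swipe, a hypothetical application-defined function:

// touchmove handler for a two-finger swipe gesture
function process_two_finger_move(ev) {
  // Two touch points changed since the last event
  if (ev.changedTouches.length === 2) {
    // handle_two_finger_swipe is an application-defined (hypothetical) function
    handle_two_finger_swipe(ev.changedTouches[0], ev.changedTouches[1]);
  }
}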

Browsers typically dispatch emulated mouse and click events when there is only a single active touch point. Multi-touch interactions involving two or more active touch points will usually only generate touch events. To prevent the emulated mouse events from being sent, use the preventDefault() method in the touch event handlers. For more information about the interaction between mouse and touch events, see Supporting both TouchEvent and MouseEvent.

Basic steps

This section describes basic usage of the above interfaces. See the Touch Events Overview for a more detailed example.

Register an event handler for each touch event type.

// Register touch event handlers
someElement.addEventListener('touchstart', process_touchstart, false);
someElement.addEventListener('touchmove', process_touchmove, false);
someElement.addEventListener('touchcancel', process_touchcancel, false);
someElement.addEventListener('touchend', process_touchend, false);

Process an event in an event handler, implementing the application's gesture semantics.

// touchstart handler
function process_touchstart(ev) {
  // Use the event's data to call out to the appropriate gesture handlers
  switch (ev.touches.length) {
    case 1: handle_one_touch(ev); break;
    case 2: handle_two_touches(ev); break;
    case 3: handle_three_touches(ev); break;
    default: gesture_not_supported(ev); break;
  }
}

Access the attributes of a touch point.

// Create touchstart handler
someElement.addEventListener('touchstart', function(ev) {
  // Iterate through the touch points that were activated
  // for this element and process each event 'target'
  for (var i=0; i < ev.targetTouches.length; i++) {
    process_target(ev.targetTouches[i].target);
  }
}, false);

Prevent the browser from processing emulated mouse events.

// touchmove handler
function process_touchmove(ev) {
  // Call preventDefault() to suppress the emulated mouse events
  ev.preventDefault();
}

Best practices

Here are some best practices to consider when using touch events:

  • Minimize the amount of work that is done in the touch handlers.
  • Add the touch point handlers to the specific target element (rather than the entire document or nodes higher up in the document tree).
  • Add the touchmove, touchend and touchcancel event handlers within the touchstart handler (see the sketch after this list).
  • The target touch element or node should be large enough to accommodate a finger touch. If the target area is too small, touching it could result in firing other events for adjacent elements.
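
One way to apply the third practice is to defer registration of the other handlers until a touch actually begins, as in this sketch (using the process_* handlers from the Basic steps; the handlers could likewise be removed in touchend and touchcancel):

// Register the remaining handlers only once a touch starts
someElement.addEventListener('touchstart', function(ev) {
  process_touchstart(ev);
  someElement.addEventListener('touchmove', process_touchmove, false);
  someElement.addEventListener('touchend', process_touchend, false);
  someElement.addEventListener('touchcancel', process_touchcancel, false);
}, false);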

Implementation and deployment status

The touch events browser compatibility data indicates touch event support among mobile browsers is relatively broad, with desktop browser support lagging although additional implementations are in progress.

Some new features regarding a touch point's touch area - the area of contact between the user and the touch surface - are in the process of being standardized. The new features include the X and Y radius of the ellipse that most closely circumscribes a touch point's contact area with the touch surface. The touch point's rotation angle - the number of degrees of rotation to apply to the described ellipse to align with the contact area - is also being standardized, as is the amount of pressure applied to a touch point.
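
Where a browser implements these features, they appear as additional properties on each Touch object. The following sketch reads them; since support is still in flux, the values may be undefined in some browsers:

// Read the (still-standardizing) touch area properties of a touch point
function log_touch_area(touch) {
  console.log('radius: ' + touch.radiusX + ' x ' + touch.radiusY);
  console.log('rotation angle: ' + touch.rotationAngle + ' degrees');
  console.log('pressure: ' + touch.force);
}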

What about Pointer Events?

The introduction of new input mechanisms results in increased application complexity to handle various input events, such as key events, mouse events, pen/stylus events, and touch events. To help address this problem, the Pointer Events standard defines events and related interfaces for handling hardware-agnostic pointer input from devices including a mouse, pen, or touchscreen. That is, the abstract pointer creates a unified input model that can represent a contact point for a finger, pen/stylus or mouse. See the Pointer Events MDN article.

The pointer event model can simplify an application's input processing, since a pointer represents input from any input device. Additionally, the pointer event types are very similar to mouse event types (for example, pointerdown and pointerup), so code that handles pointer events closely matches mouse-handling code.
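
For instance, a pointerdown handler can branch on the event's pointerType property to tell which kind of device produced the input; this minimal sketch mirrors typical mousedown handling:

// Pointer events unify mouse, pen/stylus and touch input
someElement.addEventListener('pointerdown', function(ev) {
  switch (ev.pointerType) {
    case 'mouse': console.log('mouse input'); break;
    case 'pen':   console.log('pen/stylus input'); break;
    case 'touch': console.log('touch input'); break;
  }
}, false);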

The implementation status of pointer events in browsers is relatively high with Chrome, Firefox, IE11 and Edge having complete implementations.
