Anchor point in CALayer

Published 2024-11-15 17:44:10


Looking at the Touches example from Apple's documentation, there is this method:

// scale and rotation transforms are applied relative to the layer's anchor point
// this method moves a gesture recognizer's view's anchor point between the user's fingers
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}

First question, can someone explain the logic of setting the anchor point in the subview, and changing the center of the superview (like why this is done)?

Lastly, how does the math work for the anchorPoint statement? If you have a view that has a bounds of 500, 500, and say you touch at 100, 100 with one finger, 500, 500 with the other. In this box your normal anchor point is (250, 250). Now it's ???? (have no clue)

Thanks!

2 Answers

一抹微笑 2024-11-22 17:44:10


The center property of a view is a mere reflection of the position property of its backing layer. Surprisingly, what this means is that the center need not be at the center of your view. Where the position sits within the view's bounds is determined by the anchorPoint, which takes values anywhere between (0,0) and (1,1). Think of it as a normalized indicator of where the position lies within the bounds. If you change the anchorPoint, the bounds adjust themselves around the fixed position rather than the position shifting with respect to its superlayer/superview, so it is the frame that moves. To readjust the position so that the frame of the view doesn't shift, one can manipulate the center.

piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);

Imagine the original setup, where O is the touch point and X is the position/anchor point:

+++++++++++  
+ O       +         +++++++++++
+    X    +  -->    + X       +
+         +         +         +
+++++++++++         +         +
                    +++++++++++

Now we want this X to be at the point where the user has touched. We do this because all scaling and rotations are done based on the position/anchorPoint. To adjust the frame back to its original position, we set the "center" of the view to the touch location.

piece.center = locationInSuperview;

This is reflected in the view readjusting its frame back:

                    +++++++++++  
+++++++++++         + X       +
+ X       +  -->    +         +
+         +         +         +
+         +         +++++++++++
+++++++++++

Now when the user rotates or scales, it will happen as if the axis were at the touch point rather than the true center of the view.
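The two steps in the answer's code (move the anchorPoint, then put the center back under the fingers) can also be folded into a single general-purpose helper. Below is a minimal sketch, with a method name of my own choosing, that changes a layer's anchorPoint while compensating its position so the frame never visibly jumps; it also accounts for any transform already applied to the view.

// Minimal sketch (method name is my own): change the anchorPoint without letting the
// view's frame jump, by shifting layer.position by the same amount the anchor moved.
- (void)setAnchorPoint:(CGPoint)anchorPoint forView:(UIView *)view {
    // Old and new anchor locations, expressed in points inside the view's bounds.
    CGPoint newPoint = CGPointMake(view.bounds.size.width  * anchorPoint.x,
                                   view.bounds.size.height * anchorPoint.y);
    CGPoint oldPoint = CGPointMake(view.bounds.size.width  * view.layer.anchorPoint.x,
                                   view.bounds.size.height * view.layer.anchorPoint.y);

    // Respect any scale/rotation transform already applied to the view.
    newPoint = CGPointApplyAffineTransform(newPoint, view.transform);
    oldPoint = CGPointApplyAffineTransform(oldPoint, view.transform);

    CGPoint position = view.layer.position;
    position.x += newPoint.x - oldPoint.x;
    position.y += newPoint.y - oldPoint.y;

    view.layer.anchorPoint = anchorPoint;   // on its own this would shift the frame...
    view.layer.position    = position;      // ...but this compensation keeps it in place
}

In the answer's code the compensation happens implicitly: setting piece.center = locationInSuperview puts the position exactly where the new anchor point already sits on screen, so the frame ends up unchanged.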

In your example, the location of the gesture in the view ends up being the average of the two touches, i.e. (300, 300), which means the anchorPoint would be (0.6, 0.6), and in response the frame will move up. To readjust, we move the center to the touch location, which moves the frame back down.
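To put numbers on that, here is a small sketch of the arithmetic, assuming the 500 × 500 view starts with its frame at (0, 0) in the superview (so its center/position is (250, 250)) and no transform is applied:

// Assumptions: 500x500 view, frame starting at (0, 0) in its superview,
// touches at (100, 100) and (500, 500), identity transform.
CGSize  size     = CGSizeMake(500, 500);
CGPoint position = CGPointMake(250, 250);   // center/position before the gesture
CGPoint anchor   = CGPointMake(0.6, 0.6);   // 300/500, from the touch centroid (300, 300)

// With the new anchorPoint and a still-unchanged position, the frame origin becomes:
CGFloat originX = position.x - anchor.x * size.width;    // 250 - 300 = -50
CGFloat originY = position.y - anchor.y * size.height;   // 250 - 300 = -50
NSLog(@"frame origin after moving the anchor: (%g, %g)", originX, originY);

// Setting piece.center to the gesture's location in the superview, (300, 300), moves
// position to (300, 300), so the origin returns to 300 - 0.6 * 500 = 0 and the view
// does not visibly move.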

神经大条 2024-11-22 17:44:10


First question, can someone explain the logic of setting the anchor point in the subview, and changing the center of the superview (like why this is done)?

This code isn't changing the center of the superview. It's changing the center of the gesture recognizer's view to be the location of the gesture (coordinates specified in the superview's frame). That statement is simply moving the view around in its superview while following the location of the gesture. Setting center can be thought of as a shorthand way of setting frame.
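A minimal sketch of that relationship, reusing the piece variable from the question's code and assuming an identity transform: the view's center reads and writes its layer's position, and the frame can be derived from position, anchorPoint, and bounds, so moving the center moves the whole frame.

// With no transform applied, piece.center is the same value as piece.layer.position,
// and the frame follows from position, anchorPoint and bounds.
CGPoint position = piece.layer.position;     // equal to piece.center
CGPoint anchor   = piece.layer.anchorPoint;
CGSize  size     = piece.bounds.size;

CGRect derivedFrame = CGRectMake(position.x - anchor.x * size.width,
                                 position.y - anchor.y * size.height,
                                 size.width,
                                 size.height);
// derivedFrame equals piece.frame, so changing center shifts the frame by the same
// amount, exactly like assigning a new frame origin.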

As for the anchor point, it affects how scale and rotation transforms are applied to the layer. For example, a layer will rotate using that anchor point as its axis of rotation. When scaling, all points are offset around the anchor point, which doesn't move itself.
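As a concrete example, here is a hedged sketch (the handler name and setup are mine, not quoted from Apple's sample) of a rotation gesture handler that first moves the anchor point under the fingers with the method from the question and then applies the rotation, so the view spins around the touch point:

// Sketch of a rotation handler (names are my own). Because the transform is applied
// about layer.anchorPoint, the view rotates around the point under the fingers.
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
        gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        gestureRecognizer.view.transform =
            CGAffineTransformRotate(gestureRecognizer.view.transform,
                                    gestureRecognizer.rotation);
        // Reset so the next callback only delivers the incremental rotation.
        gestureRecognizer.rotation = 0;
    }
}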

Lastly, how does the math work for the anchorPoint statement? If you have a view that has a bounds of 500, 500, and say you touch at 100, 100 with one finger, 500, 500 with the other. In this box your normal anchor point is (250, 250). Now it's ???? (have no clue)

The key concept to note on the anchorPoint property is that the range of the values in the point is declared to be [0, 1], no matter what the actual size of the layer is. So, if you have a view with bounds (500, 500) and you touch twice at (100, 100) and (500, 500), the location in the view of the gesture as a whole will be (300, 300), and the anchor point will be (300/500, 300/500) = (0.6, 0.6).
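For a multi-touch gesture, locationInView: reports a single representative point (typically the centroid of the touches), which is what the division by the bounds then normalizes. A quick sketch with the question's numbers:

// Two touches at (100, 100) and (500, 500) in a 500x500 view.
CGPoint locationInView = CGPointMake((100 + 500) / 2.0,          // 300
                                     (100 + 500) / 2.0);         // 300
CGPoint anchorPoint    = CGPointMake(locationInView.x / 500.0,   // 0.6
                                     locationInView.y / 500.0);  // 0.6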
