Is UITextInput missing a selection-handling mechanism?
If you implement UITextInput on your custom view and, say, use CoreText to render the text, you get to a point where you can draw your own cursor and selection/marking and have it fully working with the hardware keyboard. If you switch to Japanese input you see the marking, but there's something curious: if you long-press inside the marking, you get the rectangular system loupe and selection handling without having to deal with the touches yourself.
What I don't get is why we have to implement our own touch handling for the selection, draw our own loupes, and so on. It's working for marking! So what do I have to do to get the standard gesture recognizers added to my custom view as well?
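For context, the setup described above looks roughly like this. A minimal sketch in modern Swift (the original question predates Swift, so this is an illustration, not the author's code): a custom view that becomes first responder and adopts UIKeyInput so the system keyboard drives it, while the view draws the text itself. The `textStorage` property is a hypothetical name.

```swift
import UIKit

// Sketch: a custom view that talks to the system keyboard via
// UIKeyInput while drawing its own text (e.g. with Core Text).
final class CoreTextEditorView: UIView, UIKeyInput {
    // Hypothetical backing store for the text being edited.
    private var textStorage = ""

    // Becoming first responder is what brings up the keyboard.
    override var canBecomeFirstResponder: Bool { true }

    // MARK: - UIKeyInput (the bare minimum for typing)
    var hasText: Bool { !textStorage.isEmpty }

    func insertText(_ text: String) {
        textStorage.append(text)
        setNeedsDisplay()   // redraw our own text layout
    }

    func deleteBackward() {
        guard hasText else { return }
        textStorage.removeLast()
        setNeedsDisplay()
    }

    // To get marked text (Japanese input etc.) you must go further
    // and adopt the full UITextInput protocol: selectedTextRange,
    // markedTextRange, setMarkedText(_:selectedRange:),
    // caretRect(for:), firstRect(for:), and so on. That is exactly
    // where you discover that selection touch handling and loupe
    // drawing are *not* supplied for you.
}
```

The point of the sketch is the asymmetry: the system grants loupe and selection handling inside marked text for free, but everything outside it is on you.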
The one sample on the dev site only has a comment saying that user selection is outside the scope of the sample, which would indicate that you do indeed have to do it yourself.
I don't think it is in Apple's interest that every developer building their own rich text editor class keeps writing their own selection-handling code, let alone custom-drawing the round and rectangular loupes?! Granted, you can try to reverse-engineer it so that it comes really close, but it might give users a strange feeling if the selection mechanics differ ever so slightly.
I found that developers are split into two groups:
1) those who contort UIWebView with extensive JavaScript code to turn it into an editor
2) those who painstakingly implement the selection mechanics and loupe drawing themselves
So what is the solution here? Keep submitting Radars until Apple adds this missing piece? Or does this already exist (as claimed by the aforementioned engineer I met) and we are just too dense to find out how to make use of it, instead resorting to doing everything (except marked text) manually?
Even the smart guys at Omnifocus seem to think that the manual approach is the only one that works. This makes me sad: you get such a great protocol, but when you implement it you find it severely crippled. Maybe even intentionally?
Unfortunately the answer to my question is: YES. If you want selection mechanics on a custom view, you have to program them yourself.
As of iOS 6 you can subclass UITextView and draw the text yourself. According to an Apple engineer this should provide the system selection for you.
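A sketch of what that iOS 6 route might look like, again in modern Swift for readability: subclass UITextView so the system keeps its selection handling, loupes, and editing menu, but override drawing to lay the text out with Core Text yourself. The exact override point is an assumption on my part; the engineer gave no concrete recipe.

```swift
import UIKit
import CoreText

// Sketch: keep UITextView's system selection machinery but take over
// text rendering with Core Text (an assumed approach, not a recipe
// from Apple).
final class CustomDrawnTextView: UITextView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }

        // Core Text draws in a flipped coordinate system, so flip
        // the context before drawing.
        context.textMatrix = .identity
        context.translateBy(x: 0, y: bounds.height)
        context.scaleBy(x: 1, y: -1)

        // Build an attributed string from the view's text and font.
        let attributed = NSAttributedString(
            string: text ?? "",
            attributes: [.font: font ?? UIFont.systemFont(ofSize: 17)]
        )

        // Lay out and draw the text into the view's bounds.
        let framesetter = CTFramesetterCreateWithAttributedString(attributed)
        let path = CGPath(rect: bounds, transform: nil)
        let frame = CTFramesetterCreateFrame(
            framesetter, CFRange(location: 0, length: 0), path, nil)
        CTFrameDraw(frame, context)
    }
}
```

Whether the system's selection rectangles then line up with your custom layout is exactly the kind of detail you would need to verify; nothing here guarantees it.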