C++ / Win32: detecting a text selection in another application, and getting or modifying it
I've seen an application used to help the blind, called JAWS (it acts as a screen reader). It detects strings and reads them aloud in a lot of applications, such as the MS Office applications, Notepad, Internet Explorer, etc. Is it possible to detect a text selection in another application? How? I think accessibility is used, but I don't know how to do it! I can replace selection detection with a hotkey press.
I tried to find a solution along the following lines:
- Get the top, active window, or find it from the mouse location.
- Get its child window from the mouse location.
- Get the selected text, or set it.
In MS Word I used Spy++ to detect the control that contains the text; I get "Microsoft Word Document".
There's no easy way to do this, because there's no single consistent way to get at text (selected or otherwise) from an arbitrary application. Apps such as JAWS usually have a whole battery of techniques they use, depending on the app or control:
For EDIT and RichEdit controls, the various EM_ messages work.
For Internet Explorer, the HTML DOM can be used.
For Word, the Text Object Model interfaces can be used to access text and formatting. Other apps may support similar app-specific models.
Some (but not all) apps and app frameworks support accessibility APIs such as UIAutomation or IAccessible2, which allow access to information about the controls in the app, and also about text and text selection.
For apps that don't support any of the above, screen readers often use a technique called an off-screen model: a complex approach that involves intercepting all graphical output calls and maintaining an in-memory database of what was drawn where, so they can look up the text that should be at any point on the screen.
Since none of these covers everything in its own right, screenreaders typically try all of them as appropriate for the current app: you can almost think of a screenreader as being a library of special-case code to extract information from various apps.