Preventing Screen Scraping

Posted 2024-07-11 02:05:45

Answers (21)

烙印 2024-07-18 02:05:45

You can’t prevent it.

静若繁花 2024-07-18 02:05:45

I've written a blog post about this here: http://blog.screen-scraper.com/2009/08/17/further-thoughts-on-hindering-screen-scraping/

To paraphrase:

If you post information on the internet, someone can get it; it's just a matter of how many resources they want to invest. Some ways to raise the required resources include:

Turing tests

The most common implementation of the Turing Test is the old CAPTCHA that tries to ensure a human reads the text in an image and feeds it into a form.

We have found a large number of sites that implement a very weak CAPTCHA that takes only a few minutes to get around. On the other hand, there are some very good implementations of Turing Tests that we would opt not to deal with given the choice, but sophisticated OCR can sometimes overcome them, and many bulletin-board spammers have clever tricks to get past them.

Data as images

Sometimes you know which parts of your data are valuable. In that case it becomes reasonable to replace such text with an image. As with the Turing Test, there is OCR software that can read it, and there’s no reason we can’t save the image and have someone read it later.

Oftentimes, however, listing data as an image without a text alternative is in violation of the Americans with Disabilities Act (ADA), and can be overcome with a couple of phone calls to a company's legal department.

Code obfuscation

Using something like a JavaScript function to show data on the page, though it's not anywhere in the HTML source, is a good trick. Other examples include putting prolific, extraneous comments through the page, or having an interactive page that orders things in an unpredictable way (the example I think of used CSS to make the display the same no matter the arrangement of the code).

CSS Sprites

Recently we've encountered some instances where a page has one image containing numbers and letters, and uses CSS to display only the characters desired. This is in effect a combination of the previous two methods. First we have to get that master image and read what characters are there, then we need to read the CSS in the site and determine which character each tag points to.

While this is very clever, I suspect this too would run afoul of the ADA, though I've not tested that yet.

Limit search results

Most of the data we want to get at is behind some sort of form. Some are easy, and submitting a blank form will yield all of the results. Some need an asterisk or percent sign put in the form. The hardest ones are those that will give you only so many results per query. Sometimes we just make a loop that submits each letter of the alphabet to the form, but if that's too general, we must make a loop to submit every combination of 2 or 3 letters; for 3 letters that's 17,576 page requests.
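
As a rough illustration of that brute-force loop (the search URL and its "q" parameter are invented placeholders, not from any real site), a Python sketch might look like:

  # Brute-force every 3-letter prefix against a hypothetical search form.
  import itertools
  import string
  import urllib.parse
  import urllib.request

  BASE_URL = "https://example.com/search"  # placeholder endpoint

  def fetch(query):
      url = BASE_URL + "?" + urllib.parse.urlencode({"q": query})
      with urllib.request.urlopen(url) as resp:
          return resp.read().decode("utf-8", errors="replace")

  # 26 ** 3 == 17,576 requests, matching the figure above.
  for combo in itertools.product(string.ascii_lowercase, repeat=3):
      page = fetch("".join(combo))
      # ... parse the results out of `page` here ...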

IP Filtering

On occasion, a diligent webmaster will notice a large number of page requests coming from a particular IP address, and block requests from that domain. There are a number of methods to pass requests through alternate domains, however, so this method isn’t generally very effective.

Site Tinkering

Scraping always keys off of certain things in the HTML. Some sites have the resources to constantly tweak their HTML so that any scrapes are constantly out of date. It therefore becomes cost-ineffective to continually update the scrape for the constantly changing conditions.

守望孤独 2024-07-18 02:05:45

So, one approach would be to obfuscate the code (rot13, or something), and then have some JavaScript in the page that does something like document.write(unobfuscate(obfuscated_page)). But this totally blows away search engines (probably!).
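
As a minimal sketch of that idea (the unobfuscate name comes from the sentence above; the rest is an illustrative Python/JavaScript pairing, not a hardened scheme):

  # Server side: rot13 the fragment and emit an inline JS decoder.
  import codecs
  import json

  def obfuscated_page(html_fragment):
      encoded = codecs.encode(html_fragment, "rot13")
      return f"""<script>
  function unobfuscate(s) {{
    return s.replace(/[a-zA-Z]/g, function (c) {{
      var base = c <= 'Z' ? 65 : 97;
      return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
    }});
  }}
  document.write(unobfuscate({json.dumps(encoded)}));
  </script>"""

  print(obfuscated_page("<p>Price: 42</p>"))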

Of course this doesn’t actually stop someone who wants to steal your data either, but it does make it harder.

Once the client has the data it is pretty much game over, so you need to look at something on the server side.

Given that search engines are basically screen scrapers, things are difficult. You need to look at what the difference between the good screen scrapers and the bad screen scrapers is. And of course, you have the normal human users as well. So this comes down to the problem of how you can, on the server, effectively classify a request as coming from a human, a good screen scraper, or a bad screen scraper.

So, the place to start would be looking at your log files and seeing if there is some pattern that allows you to effectively classify requests, and then, on determining the pattern, seeing if there is some way that a bad screen scraper, knowing this classification, could cloak itself to appear like a human or a good screen scraper.

Some ideas:

  • You may be able to determine the good screen scrapers by IP address(es).
  • You could potentially determine scraper vs. human by number of concurrent connections, total number of connections per time period, access patterns, etc.

Obviously these aren't ideal or fool-proof. Another tactic is to determine what measures you can take that are unobtrusive to humans but (may be) annoying to scrapers. An example might be slowing down the number of requests. (This depends on the time criticality of the request; if they are scraping in real time, it would affect their end users.)
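
For instance, a minimal sliding-window classifier over request timestamps (the thresholds are invented for illustration) could look like:

  # Flag any IP whose request rate over a sliding window looks non-human.
  import time
  from collections import defaultdict, deque

  WINDOW_SECONDS = 60
  MAX_REQUESTS_PER_WINDOW = 120  # arbitrary "not human" threshold

  hits = defaultdict(deque)

  def looks_like_scraper(ip, now=None):
      now = time.time() if now is None else now
      window = hits[ip]
      window.append(now)
      while window and now - window[0] > WINDOW_SECONDS:
          window.popleft()  # drop timestamps outside the window
      return len(window) > MAX_REQUESTS_PER_WINDOW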

The other aspect is to look at serving these users better. Clearly they are scraping because they want the data. If you provide them an easy way to directly obtain the data in a useful format, that will be easier for them than screen scraping. If there is an easy way, then access to the data can be regulated. E.g., give requesters a unique key, and then limit the number of requests per key to avoid overloading the server, or charge per 1,000 requests, etc.
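
A toy version of that key-and-quota scheme (in-memory only; a real service would persist usage and handle billing):

  # Issue per-requester keys and meter requests against a quota.
  import secrets

  QUOTA_PER_KEY = 1000  # e.g. the charge-per-1,000-requests tier above
  usage = {}

  def issue_key():
      key = secrets.token_urlsafe(16)
      usage[key] = 0
      return key

  def allow_request(key):
      if key not in usage:
          return False  # unknown key: reject
      if usage[key] >= QUOTA_PER_KEY:
          return False  # quota exhausted: bill more or block
      usage[key] += 1
      return True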

Of course there are still people who will want to rip you off, and there are probably other ways to disincentivise them, but they probably start being non-technical and require legal avenues to be pursued.

没有心的人 2024-07-18 02:05:45

It's pretty hard to prevent screen scraping, but if you really, really wanted to, you could change your HTML or your HTML tag names frequently. Most screen scrapers work by using string comparisons with tag names, or regular expressions searching for particular strings, etc. If you change the underlying HTML, they will need to change their software.
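
A toy illustration of why that churn hurts scrapers (all names here are invented): regenerate class names per deployment so any hard-coded string or regex match goes stale.

  # Render the same data under class names that change every deploy.
  import random
  import string

  def random_class(prefix="c"):
      return prefix + "".join(random.choices(string.ascii_lowercase, k=8))

  PRICE_CLASS = random_class()  # new name on each deployment

  def render_price(price):
      return f'<span class="{PRICE_CLASS}">{price}</span>'

  # A scraper matching last week's markup, e.g.
  # re.search(r'class="cqwertyui">([^<]+)', html), now finds nothing.
  print(render_price("$9.99"))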

欢你一世 2024-07-18 02:05:45

It would be very difficult to prevent. The problem is that Web pages are meant to be parsed by a program (your browser), so they are exceptionally easy to scrape. The best you can do is be vigilant, and if you find that your site is being scraped, block the IP of the offending program.

只是我以为 2024-07-18 02:05:45

Don't prevent it; detect it and retaliate against those who try.

For example, leave your site open to download but disseminate some links that no sane user would follow. If someone follows such a link, clicks too fast for a human, or shows other suspicious behaviour, react promptly to stop the user from trying. If there is a login system, block the user and contact him regarding the unacceptable behaviour. That should make sure they don't try again. If there is no login system, return a big warning with fake links to the same warning instead of the actual pages.
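
A minimal sketch of such a trap link using the Python standard-library HTTP server (the path and the in-memory blocklist are illustrative only):

  # Serve a link no human would see, and flag any client that follows it.
  from http.server import BaseHTTPRequestHandler, HTTPServer

  TRAP_PATH = "/special-offers-2009.html"  # hidden from humans via CSS
  flagged_ips = set()

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          ip = self.client_address[0]
          if self.path == TRAP_PATH or ip in flagged_ips:
              flagged_ips.add(ip)
              self.send_response(403)
              self.end_headers()
              self.wfile.write(b"Automated access detected.")
              return
          self.send_response(200)
          self.end_headers()
          # The trap link is present in the markup but invisible to users.
          self.wfile.write(
              b'<a href="/special-offers-2009.html" style="display:none">x</a>'
              b"<p>Normal page content.</p>"
          )

  # HTTPServer(("", 8000), Handler).serve_forever()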

This really applies to things like Safari Bookshelf, where a user copy-pasting a piece of code or a chapter to mail to a colleague is fine while a full download of a book is not acceptable. I'm quite sure that they detect when someone tries to download their books, block the account, and show the culprit that he might get in REAL trouble should he try that again.

To make a non-IT analogy: if airport security only made it hard to bring weapons on board planes, terrorists would try many ways to sneak one past security. But the fact that just trying will get you in deep trouble makes it so that nobody is going to try to find a way to sneak one through. The risk of getting caught and punished is too high. Just do the same, if possible.

笑饮青盏花 2024-07-18 02:05:45

Search engines ARE screen scrapers by definition. So most things you do to make it harder to screen scrape will also make it harder to index your content.

Well-behaved robots will honour your robots.txt file.
You could also block the IPs of known offenders, or add obfuscating HTML tags into your content when it's not being sent to a known-good robot. It's a losing battle, though. I recommend the litigation route for known offenders.
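
For reference, this is what the well-behaved side of that bargain looks like: a polite bot checking robots.txt with Python's standard library before fetching (the URLs and user-agent string are placeholders):

  # A courteous crawler consults robots.txt before each fetch.
  import urllib.robotparser

  rp = urllib.robotparser.RobotFileParser()
  rp.set_url("https://example.com/robots.txt")
  rp.read()

  if rp.can_fetch("MyCrawler/1.0", "https://example.com/private/data"):
      print("allowed to fetch")
  else:
      print("disallowed; a well-behaved bot stops here")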

You could also hide identifying data in the content to make it easier to track down offenders. Encyclopaedias have been known to add fictitious entries to help detect and prosecute copyright infringers.

樱花落人离去 2024-07-18 02:05:45

Prevent? -- impossible, but you can make it harder.

Disincentivise? -- possible, but you won't like the answer: provide bulk data exports for interested parties.

In the long run, all your competitors will have the same data if you publish it, so you need other means of diversifying your website (e.g. update it more frequently, make it faster or easier to use). Nowadays even Google is using scraped information like user reviews; what do you think you can do about that? Sue them and get booted from their index?

神仙妹妹 2024-07-18 02:05:45

The best return on investment is probably to add random newlines and multiple spaces, since most screen scrapers work from the HTML as text rather than as XML (since most pages don't parse as valid XML).

The browser ignores whitespace, so your users don't notice that

  Price : 1
  Price :    2
  Price\n:\n3

are different. (This comes from my experience scraping government sites with AWK.)
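
A quick sketch of that whitespace trick (the filler set is arbitrary): vary only the whitespace that already exists between tags, which browsers collapse anyway.

  # Re-randomize inter-tag whitespace so string matches aren't stable.
  import random
  import re

  FILLERS = [" ", "  ", "\n", "\t", " \n "]

  def randomize_whitespace(html):
      return re.sub(r">\s+<",
                    lambda m: ">" + random.choice(FILLERS) + "<",
                    html)

  print(randomize_whitespace("<b>Price</b> <i>:</i> <span>1</span>"))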

The next step is to add tags around random elements to mess up the DOM.

七度光 2024-07-18 02:05:45

One way is to create a function that takes text and a position, then server-side generate an (x, y) position for every character in the text and emit divs, in random order, containing the characters. A generated JavaScript snippet then positions every div in the right place on screen. It looks fine on screen, but in the underlying code there is no real order in which to fetch the text, unless you go to the trouble of scraping via the JavaScript (which can be changed dynamically on every request).
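
A rough sketch of that idea (the fixed-width character spacing is a crude stand-in for real font metrics):

  # Emit one absolutely positioned div per character, in shuffled order,
  # so the source order of the markup reveals nothing about the text.
  import random

  CHAR_WIDTH_PX = 10  # crude fixed-width assumption

  def scrambled_markup(text, x=20, y=20):
      pieces = []
      for i, ch in enumerate(text):
          left = x + i * CHAR_WIDTH_PX
          pieces.append(
              f'<div style="position:absolute;left:{left}px;top:{y}px">{ch}</div>'
          )
      random.shuffle(pieces)
      return "\n".join(pieces)

  print(scrambled_markup("secret"))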

It's a lot of work and will likely have many quirks; it depends on how much text you have, how complicated the site's UI is, and other things.

┈┾☆殇 2024-07-18 02:05:45

Very few, I think, given that the intention of any site is to publish (i.e. to make public) information.

  • You can hide your data behind logins of course, but that's a very situational solution.

  • I've seen apps which would only serve up content where the request headers indicated a web browser (rather than, say, anonymous or "jakarta"), but that's easy to spoof and you'll lose some genuine humans.

  • Then there's the possibility that you accept some scraping, but make life insurmountably hard by not serving content if requests are coming from the same IP at too high a rate. This suffers from not being full coverage, but more importantly there is the "AOL problem": one IP can cover many, many unique human users.

Both of the last two techniques also depend heavily on having traffic-intercepting technology, which is an inevitable performance and/or financial outlay.

倦话 2024-07-18 02:05:45

Given that most sites want a good search engine ranking, and search engines are scraper bots, there's not much you can do that won't harm your SEO.

You could make an entirely AJAX-loaded site or a Flash-based site, which would make it harder for bots, or hide everything behind a login, which would make it harder still, but either of these approaches is going to hurt your search rankings and possibly annoy your users, and if someone really wants it, they'll find a way.

The only guaranteed way of having content that can't be scraped is to not publish it on the web. The nature of the web is such that when you put it out there, it's out there.

2024-07-18 02:05:45

If it's not much information you want to protect, you can convert it to a picture on the fly. Then they must use OCR, which makes it easier to scrape another site instead of yours.
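
A minimal sketch of that on-the-fly conversion using the third-party Pillow library (the sizes are arbitrary and the font is the library default, chosen only for brevity):

  # Render a piece of protected text as a PNG instead of HTML text.
  from PIL import Image, ImageDraw

  def text_to_image(text):
      img = Image.new("RGB", (10 * len(text) + 20, 30), "white")
      draw = ImageDraw.Draw(img)
      draw.text((10, 8), text, fill="black")  # default bitmap font
      return img

  text_to_image("Price: $9.99").save("price.png")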

予囚 2024-07-18 02:05:45

You could check the user agent of clients coming to your site. Some third-party screen-scraping programs have their own user agent, so you could block that. Good screen scrapers, however, spoof their user agent, so you won't be able to detect them. Be careful if you do try to block anyone, because you don't want to block a legitimate user :)

The best you can hope for is to block people using screen scrapers that aren't smart enough to change their user agent.
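
A naive version of that check might look like the following (the blocklist entries are examples of common library defaults and, as the answer says, trivially spoofed):

  # Reject requests whose User-Agent looks like a known scraping library.
  BLOCKED_AGENT_SUBSTRINGS = ("jakarta", "python-urllib", "curl", "wget")

  def is_blocked(user_agent):
      if not user_agent:
          return True  # many scraping libraries send no UA at all
      ua = user_agent.lower()
      return any(token in ua for token in BLOCKED_AGENT_SUBSTRINGS)

  print(is_blocked("Python-urllib/3.10"))             # True
  print(is_blocked("Mozilla/5.0 (Windows NT 10.0)"))  # False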

渡你暖光 2024-07-18 02:05:45

I tried to "screen scrape" some PDF files once, only to find that they'd actually put the characters in the PDF in semi-random order. I guess the PDF format allows you to specify a location for each block of text, and they'd used very small blocks (smaller than a word). I suspect that the PDFs in question weren't trying to prevent screen scraping so much as they were doing something weird with their render engine.

I wonder if you could do something like that.
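
Something like it is easy to sketch with the third-party reportlab library: draw each character at its final coordinates but emit the glyphs in shuffled order, so the PDF's internal text stream has no readable sequence (the coordinates here are arbitrary):

  # Place characters at fixed positions, written in random order.
  import random
  from reportlab.pdfgen import canvas

  def shuffled_pdf(text, filename="scrambled.pdf"):
      c = canvas.Canvas(filename)
      placed = [(72 + i * 8, 700, ch) for i, ch in enumerate(text)]
      random.shuffle(placed)  # scramble the emission order
      for x, y, ch in placed:
          c.drawString(x, y, ch)
      c.save()

  shuffled_pdf("confidential figures")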

听风念你 2024-07-18 02:05:45

You could put everything in Flash, but in most cases that would annoy many legitimate users, myself included. It can work for some information such as stock prices or graphs.

眼波传意 2024-07-18 02:05:45

I suspect there is no good way to do this.

I suppose you could run all your content through a mechanism to convert text to images rendered using a CAPTCHA-style font and layout, but that would break SEO and annoy your users.

梦里°也失望 2024-07-18 02:05:45

Well, before you push the content from the server to the client, remove all the \r\n, \n, and \t characters and replace each run with a single space. Now you have one long line in your HTML page. Google does this. This will make it hard for others to read your HTML or JavaScript.
Then you can create empty tags and randomly insert them here and there. They will have no effect on rendering.
Then you can log all the IPs and how often they hit your site. If you see one that comes in on a regular schedule every time, mark it as a robot and block it.
Make sure you leave the search engines alone if you want them to come in.
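
The first suggestion is roughly a one-liner in Python; note this naive version would also mangle <pre> blocks and inline scripts that depend on newlines:

  # Collapse all whitespace runs so the page ships as one long line.
  import re

  def minify_html(html):
      return re.sub(r"[\r\n\t ]+", " ", html).strip()

  print(minify_html("<html>\n\t<body>\r\n  <p>Hi</p>\n</body>\n</html>"))
  # -> <html> <body> <p>Hi</p> </body> </html>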

Hope this helps

烟花易冷人易散 2024-07-18 02:05:45

What about using the iText library to create PDFs out of your database information? As with Flash, it won't make scraping impossible, but might make it a little more difficult.

Nels

孤君无依 2024-07-18 02:05:45

Old question, but: adding interactivity makes screen scraping much more difficult. If the data isn't in the original response (say, you made an AJAX request to populate a div after page load), most scrapers won't see it.

For example, I use the mechanize library to do my scraping. Mechanize doesn't execute JavaScript; it isn't a modern browser. It just parses HTML, lets me follow links, extract text, etc. Whenever I run into a page that makes heavy use of JavaScript, I choke: without a fully scripted browser (one that supports the full gamut of JavaScript) I'm stuck.
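
For context, the mechanize workflow described here looks roughly like this (the URL is a placeholder); everything it sees is the raw server response, so script-injected content simply never appears:

  # mechanize parses HTML and follows links but never runs JavaScript.
  import mechanize

  br = mechanize.Browser()
  br.set_handle_robots(False)  # mechanize honours robots.txt by default
  response = br.open("https://example.com/listing")
  html = response.read()   # only the original HTML, no AJAX results
  for link in br.links():  # links present in the raw markup
      print(link.url)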

This is the same issue that makes automated testing of highly interactive web applications so difficult.

苦行僧 2024-07-18 02:05:45

I never thought that preventing print-screen would be possible... well, what do you know: check out the new tech at sivizion.com. With their video-buffer technology there is no way to do a print screen. Cool, really cool, though hard to use... I think they license the tech too; check it out. (If I am wrong, please post here how it can be hacked.)
Found it here: How do I prevent print screen
