We have implemented a new "Number of Visits" function on our site that saves a row in our "Views" database whenever a company profile is accessed. This is done with a server-side "/addVisit" function that runs each time a (company profile) page is loaded. Unfortunately, this means we had 400+ visits from Googlebot last night.
Since we do want Google to index these pages, we can't exclude Googlebot on these pages using robots.txt.
I have also read that running this function using a jQuery $.get() will not stop Googlebot.
Is excluding known bot IPs the only working solution, or are there other options?
Or would calling the function with a jQuery $.get('/addVisit'), combined with a robots.txt rule excluding /addVisit, stop Googlebot and other bots from running it?
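Roughly what I have in mind is something like this (a sketch only; the companyId parameter and the #profile markup are invented for illustration):

    // Idea: stop calling /addVisit during server-side page rendering and
    // instead fire it from the browser once the profile page has loaded.
    // With "Disallow: /addVisit" in robots.txt, well-behaved bots should
    // never request the counting endpoint even though they crawl the page.
    $(function () {
        var companyId = $('#profile').data('company-id'); // invented markup
        $.get('/addVisit', { companyId: companyId });
    });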
Create a robots.txt file in the root directory of your website, and add:

    User-agent: Google
    Disallow: /addvisit

You can also use * instead of Google, so that /addvisit doesn't get indexed by any engine. Search engines always start by looking for /robots.txt. If this file exists, they parse the contents and respect the restrictions it applies. For more information, see http://www.robotstxt.org/robotstxt.html.
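For reference, the broader variant mentioned above would simply swap the user-agent line; this keeps every robots.txt-respecting crawler away from /addvisit, not just Google's:

    User-agent: *
    Disallow: /addvisit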
If you're handling your count via a server-side HTTP request, you could filter out any user agent that contains the word 'Googlebot'. A quick Google search shows a couple of Googlebot user agent examples:

    Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    Googlebot/2.1 (+http://www.google.com/bot.html)
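As a rough illustration, the filter could sit directly in front of the visit counter. This sketch assumes a Node/Express-style handler, which may not match the actual server stack, and recordVisit is a placeholder for the existing database insert:

    // Sketch: skip counting when the request comes from a known crawler.
    var express = require('express');
    var app = express();

    app.get('/addVisit', function (req, res) {
        var ua = req.headers['user-agent'] || '';
        if (/googlebot/i.test(ua)) {
            // Known crawler: acknowledge the request but record nothing.
            return res.sendStatus(204);
        }
        recordVisit(req.query.companyId); // placeholder for the real DB insert
        res.sendStatus(204);
    });

    // Placeholder: the real implementation writes a row to the Views table.
    function recordVisit(companyId) {
        console.log('visit recorded for company', companyId);
    }

    app.listen(3000);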