MOSS search crawl fails with "Access is denied..."

Posted 2024-08-27 16:30:24

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is

Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)

  • The default content account is an admin on the site collection that I am trying to crawl.
  • Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
  • The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.

Is there anything else that could be causing this error? Something with file system or database permissions maybe?

Edit: All signs seem to indicate that DisableLoopbackCheck should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable it?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1 (a command-line equivalent is sketched below).
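
For reference, a minimal command-line equivalent of the registry edit described above, run from an elevated prompt (the key path, value name, type, and data are exactly those from the post; a reboot is still required afterwards):

    :: Requires an elevated (administrator) prompt; /f overwrites any existing value without asking.
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

Before rebooting, reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck confirms the value was written.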

Comments (1)

淡水深流 2024-09-03 16:30:24

It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Supposedly you shouldn't be able to reach a site from within the server using the same URL you use to reach it from the outside, at least in pre-SP1 MOSS, yet somehow I had been doing exactly that for about two years. MS Support told me they don't quite understand how it ever worked, so it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly; maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.
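
The extension itself goes through Central Administration (Create or extend Web application), but as a hedged sketch, the internal machine-name URL can also be registered against the web application from the command line via stsadm's addalternatedomain operation; the parameter names below are assumed from the MOSS 2007 stsadm reference, and both URLs are placeholders for your own external URL and server name:

    :: Map an internal (machine-name) URL into the Intranet zone of the existing web application.
    stsadm -o addalternatedomain -url http://www.example.com -incomingurl http://moss-server -urlzone intranet

The crawler's content source start address would then point at http://moss-server rather than the external URL.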
