Changing the robots.txt file on a WordPress site has caused SEO chaos

Posted on 2024-08-25

I recently edited the robots.txt file on my site using a WordPress plugin. However, since I did this, Google seems to have removed my site from its search results. I'd appreciate it if I could get an expert opinion on why this is so, and a possible solution. I'd initially done it to increase my search ranking by limiting the pages being accessed by Google.

This is my robots.txt file in WordPress:

User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /category/*/*
Disallow: */trackback
Disallow: */feed
Disallow: */comments
Disallow: /*?*
Disallow: /*?
Allow: /wp-content/uploads

Sitemap: http://www.instant-wine-cellar.co.uk/wp-content/themes/Wineconcepts/Sitemap.xml
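To see what these rules actually block, a quick sketch with Python's standard-library urllib.robotparser can help. Note this parser follows the original robots.txt convention and would treat Googlebot wildcard lines such as Disallow: /*?* as literal prefixes rather than patterns, so those lines are left out below; the sample paths are illustrative.

```python
from urllib.robotparser import RobotFileParser

# The plain-prefix rules from the question (wildcard lines omitted:
# urllib.robotparser does not understand Google's '*' extensions).
rules = """\
User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Allow: /wp-content/uploads
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

base = "http://www.instant-wine-cellar.co.uk"
# Sample paths to probe; rules are checked in order, first match wins.
for path in ("/", "/wp-admin/", "/feed", "/wp-content/uploads/logo.png"):
    verdict = "allowed" if parser.can_fetch("*", base + path) else "blocked"
    print(path, "->", verdict)
```

Under these rules the homepage and uploads stay crawlable while admin and feed paths are blocked, which by itself would not deindex the whole site.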


Comments (3)

巴黎夜雨 2024-09-01 04:17:05


This is a good set of robots.txt directives for WordPress. Add Allow: /wp-content/uploads if you want uploads to be indexed, but that is usually unnecessary, as all your images, PDFs, etc. are included in your posts and pages and are indexed there.

User-agent: *
Allow: /
Disallow: /*?s=
Disallow: /wp-admin/*
Disallow: /wp-content/*
Disallow: /wp-includes/*
Disallow: /wp-content/cache
Disallow: /wp-content/themes/*
Disallow: /trackback
Disallow: /comments
Disallow: /category/
Disallow: */trackback
Disallow: */comments

But the most critical bit of info is in your page source:

<meta name='robots' content='noindex,nofollow' />

That means you have the privacy setting enabled under Dashboard/Settings/Privacy, and that blocks all search bots even before they get to robots.txt.

Once you have a good robots.txt file and have changed the WordPress privacy setting, go to Google Webmaster Tools and turn up your crawl rate so Google hits the site sooner.
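The noindex meta tag the answer points to can be detected programmatically. Here is a minimal sketch using Python's standard-library html.parser to scan a page's HTML for a robots meta directive; the sample HTML string is illustrative, and in practice you would fetch the live page source.

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "robots":
                self.directives.append(attr_map.get("content", "").lower())

def is_noindexed(html: str) -> bool:
    """Return True if the page carries a robots meta tag with 'noindex'."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in content for content in finder.directives)

# Illustrative page source, matching the tag quoted in the answer.
page = "<html><head><meta name='robots' content='noindex,nofollow' /></head></html>"
print(is_noindexed(page))
```

If this returns True for your homepage, fixing robots.txt alone will not help; the privacy setting injecting the tag has to be turned off first.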

豆芽 2024-09-01 04:17:05


Note: "You blocked all bots because you're missing the critical Allow: / after User-agent: *" is incorrect. By default, robots.txt allows all crawling; you generally do not need to specify any "Allow" directives.

However, the "noindex" robots meta tag would be a reason for the site's content not to be indexed.

Additionally, the robots.txt currently blocks all crawling, so search engines can't tell that the site may be indexed again. If you wish to have the site indexed again, you need to remove "Disallow: /" from the robots.txt file. You can verify this in Google Webmaster Tools, either by looking up the latest robots.txt file or by using the "Fetch as Googlebot" feature to test crawling of a page on the site.

耶耶耶 2024-09-01 04:17:05


I suggest you use the Google Webmaster Tools robots.txt checker, enter the URLs that are disappearing, and make sure Google would still crawl them.

That way you can verify whether the problem is your robots.txt or something else.
