Using MVC3's AntiForgeryToken with links?

Posted 2024-12-23 14:20:01


On my MVC3 site, I have a page that contains several links. These links all link back to a route on my site, with different ID values and are structured as such:

ie: www.mysite.com/links/33351/3

I'd like to take advantage of MVC3's antiforgerytoken mechanism so that I can ensure that all requests to www.mysite.com/links/33351/3 come from the link index page.

I'm familiar with how to add the token to a form, however these are all stand-alone links.

How can I accomplish this?


Comments (3)

北渚 2024-12-30 14:20:01


You can't use the AntiForgeryToken for GET requests (i.e. clicking a link). For details, see Using MVC3's AntiForgeryToken in HTTP GET to avoid Javascript CSRF vulnerability.

One solution is to check the Request.UrlReferrer to ensure they came from your index page, but this is far from reliable.
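A referrer check along those lines could be sketched as follows. This is plain JavaScript rather than the C# `Request.UrlReferrer` call itself, and the expected origin and path are illustrative assumptions; as noted, the Referer header can be absent or spoofed, so this is a weak heuristic, not a security boundary.

```javascript
// Sketch of a Referer-based origin check (heuristic only: the Referer
// header is client-supplied and may be missing or forged).
function isFromIndexPage(refererHeader, expectedOrigin, expectedPath) {
  if (!refererHeader) return false; // many clients omit Referer entirely
  try {
    const url = new URL(refererHeader);
    return url.origin === expectedOrigin && url.pathname === expectedPath;
  } catch (e) {
    return false; // malformed header value
  }
}
```

A handler would call this with the incoming Referer header, e.g. `isFromIndexPage(req.headers.referer, 'https://www.mysite.com', '/links')`, and reject the request when it returns false.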

Perhaps you could explain why you want to impose this restriction and I may be able to propose an alternative.

一城柳絮吹成雪 2024-12-30 14:20:01


Thanks to the comments above which helped me solve this.

Essentially, I created a Javascript function to process the item clicks. Each link on my page has an ID, so I simply passed the ID through to the JS function which submits the form:

<script type="text/javascript"> <!--
    function doClick(itemID) {
        document.getElementById('hid_ItemID').value = itemID;

        // add whatever additional js type processing needed here - ie. analytics, etc.

        document.forms[0].submit();
    }
//-->
</script>

The form itself contains the MVC anti-forgery token tag:

@using (Html.BeginForm("DoRequest", "DoItemClickRq", FormMethod.Post, new { target = "_blank" }))
{
    @Html.AntiForgeryToken()
    <input type="hidden" id="hid_ItemID" name="hid_ItemID" value="" />
.
.
.

The controller method:

    [ValidateAntiForgeryToken]
    [HttpPost]
    public ActionResult DoItemRequest()
    {
        int itemListID = 0;
        int pagePositionNumber = 0;
        int.TryParse(Request["hid_ItemID"], out itemListID);

.
.
.

缘字诀 2024-12-30 14:20:01

  1. For good design practice, all URLs in your application accessible via HTTP GET should be idempotent, meaning they should not change state no matter how many times they are accessed. It sounds like this may be the root of your problem, depending on the interpretation of "Navigating to the link performs some maintenance". Or you may be concerned about system load due to the maintenance. That is one reason why CSRF approaches have typically avoided handling GET requests.
  2. Exposing CSRF tokens in URLs, like session IDs, is bad security practice. URLs can leak into log files and proxy servers, especially if your site doesn't use 100% SSL.

Are the web crawlers standard well-behaved ones? If so, why not just use robots.txt to constrain the crawling behavior?

If that is not sufficient, perhaps you need to impose some sort of workflow restriction to prevent deep-link access (e.g. access to link X with session id A without going through steps 1 and 2 first is denied by your controller).
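That kind of workflow gate could be sketched as follows. This is a minimal JavaScript illustration, not the poster's implementation; the session object shape and the step names are assumptions made up for the example.

```javascript
// Sketch of a server-side workflow gate: a deep link is honored only if
// the session has already passed through the required earlier steps.
const REQUIRED_STEPS = ['step1', 'step2']; // illustrative step names

// Record that the session completed a step (idempotent).
function recordStep(session, step) {
  session.completedSteps = session.completedSteps || [];
  if (!session.completedSteps.includes(step)) {
    session.completedSteps.push(step);
  }
}

// A controller would call this before serving /links/{id}/{n} and
// respond with e.g. 403 when it returns false.
function canAccessLink(session) {
  const done = session.completedSteps || [];
  return REQUIRED_STEPS.every(step => done.includes(step));
}
```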
