Submitting an object to Facebook via Open Graph doesn't work, but it works after testing the URL in Facebook's Object Debugger?

I want to allow a user of my web app to post multiple objects to their timeline from one page (main_page).

I already have the user's access token stored.

Tags on the page I am trying to submit (its URL is page_url):

<meta property="fb:app_id"      content="my_app_id" /> 
<meta property="og:type"        content="my_namespace:my_object" /> 
<meta property="og:title"       content="some string" /> 
<meta property="og:description" content="some other string" /> 
<meta property="og:image"       content="some_image_url" />
<meta property="og:locale"      content="en_US" />
<meta property="og:url"         content="page_url" />   

Rails code to submit the URL, triggered from main_page:

begin
  # POST the custom action; the :my_object property points Facebook at the object's URL (page_url)
  fb_post = RestClient.post 'https://graph.facebook.com/me/my_namespace:do',
                            :access_token => user.get_facebook_auth_token,
                            :my_object    => "page_url"
rescue StandardError => e
  p 'e.response is'
  p e.response
end

Output

2011-11-02T02:42:14+00:00 app[web.1]: "e.response is"
2011-11-02T02:42:14+00:00 app[web.1]: "{\"error\":{\"message\":\"(#3502) Object at URL page_url has og:type of 'website'. The property 'my_object' requires an object of og:type 'my_namespace:my_object'.\",\"type\":\"OAuthException\"}}"

The really weird thing is that, after getting this error, if I test the page_url in the Object Debugger, it passes without any errors/warnings and the og:type is the correct type (not 'website'), and then running the same Rails code as above works fine.

I have tried it without the og:url tag and the same thing happens.

UPDATE:

As per Igy's answer, I tried separating the object-scraping step from the action-creation step. So, before the action was submitted for a brand new object, I ran an update on the object with scrape=true.

begin
  p 'doing fb_update'
  # Ask Facebook to (re)scrape the object URL before any action is published against it
  fb_update = RestClient.post 'https://graph.facebook.com', :id => page_url, :scrape => true
  p 'fb_update is'
  p fb_update
rescue StandardError => e
  p 'e.response is'
  p e.response
end

Output

2011-11-05T13:27:40+00:00 app[web.1]: "doing fb_update"
2011-11-05T13:27:50+00:00 app[web.1]: "fb_update is"
2011-11-05T13:27:50+00:00 app[web.1]: "{\"url\":\"page_url\",\"type\":\"website\",\"title\":\"page_url\",\"updated_time\":\"2011-11-05T13:27:50+0000\",\"id\":\"id_here\"}"

The odd thing is that the type is website, and the title is the page's url. Again, I have checked both in the HTML and the Facebook debugger, and the type and title are both correct in those.

Comments (3)

疯狂的代价 2024-12-20 05:52:02

I'm running into the same issue.

The only way I've been able to successfully publish actions for the custom object types I've defined is to test the object URL manually with the Object Debugger first, and then post the action on that object through my application.

Even using the linter API -- which Facebook suggests here -- gives me an error.

curl -X POST \
     -F "id=my_custom_object_url" \
     -F "scrape=true" \
     "https://graph.facebook.com"

Only the debugger tool seems to actually scrape the page correctly.

Note that I didn't have this problem when using a pre-defined object type, such as "website":

<meta property="og:type" content="website" />

This problem only seems to affect custom object types for some reason.

UPDATE (WITH SOLUTION):

I finally figured this out. The problem actually arose from my application's inability to handle two simultaneous HTTP requests. (FYI: I'm using Heroku to deploy my Rails application.) When you make a request to the Facebook API to publish an action on an object URL (request #1), Facebook will immediately attempt to scrape the object URL you specified (request #2), and based on what it is able to scrape successfully, it returns a response to the original request. If I'm running request #1 synchronously, that will tie up my web process on Heroku, making it impossible for my application to handle request #2 at the same time. In other words, Facebook can't successfully access the object URL that it needs to scrape; instead, it returns some default values, including the object type "website". Interestingly, this occurred even when I fired up multiple web processes on Heroku. The application was intent on using the same web process to handle both requests.

I solved the problem by handling all Facebook API requests as background jobs (using delayed_job). On Heroku, this requires firing up at least one web process and one worker process. If you can do it, running API requests in the background is a good idea anyway since it doesn't tie up your website for users, making them wait several seconds before they can do anything.

By the way, I recommend running two background jobs. The first one should simply scrape the object URL by POSTing to:
https://graph.facebook.com?id={object_url}&scrape=true

Once the first job has completed successfully, fire up another background job to POST an action to the timeline:
https://graph.facebook.com/me/{app_namespace}:{action_name}?access_token={user_access_token}
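
A minimal sketch of that two-job chain, assuming delayed_job's custom-job style (any object with a #perform method) and rest-client; the ScrapeObjectJob/PublishActionJob class names are hypothetical, and the my_namespace:do action and :my_object property are simply carried over from the question:

# Sketch only -- job names are hypothetical; adjust the action/property names to your app.
require 'rest-client'   # not needed under Rails/Bundler, shown so the snippet is self-contained

# Job 1: force Facebook to scrape the object URL.
class ScrapeObjectJob < Struct.new(:object_url, :access_token)
  def perform
    RestClient.post('https://graph.facebook.com', :id => object_url, :scrape => true)
    # Chain the second job only once the scrape call has succeeded.
    Delayed::Job.enqueue PublishActionJob.new(object_url, access_token)
  end
end

# Job 2: publish the timeline action against the freshly scraped object.
class PublishActionJob < Struct.new(:object_url, :access_token)
  def perform
    RestClient.post('https://graph.facebook.com/me/my_namespace:do',
                    :access_token => access_token,
                    :my_object    => object_url)
  end
end

# In the controller, instead of calling the Graph API inline:
# Delayed::Job.enqueue ScrapeObjectJob.new(page_url, user.get_facebook_auth_token)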

MORE RECENT UPDATE:

Per the suggestion in the comments, using Unicorn will also do the trick without the need for delayed_job. See more here if you're using Heroku:
http://blog.railsonfire.com/2012/05/06/Unicorn-on-Heroku.html
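
For reference, a minimal config/unicorn.rb along the lines that post describes; the worker count and timeout below are assumptions, not tuned values:

# config/unicorn.rb -- minimal sketch for Heroku
worker_processes 3     # several workers, so Facebook's scrape request can be served
                       # while the original publish request is still being handled
timeout 30
preload_app true

before_fork do |server, worker|
  # Release the master process's DB connection before forking
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker establishes its own DB connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end

The matching Procfile entry would be something like: web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb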

看轻我的陪伴 2024-12-20 05:52:02

The object creation documentation says Facebook should scrape an object the first time you create an action against it, but it also says:

In some hosting and development platforms where you create an object and publish to Facebook simultaneously, you may get an error saying that the object does not exist. This is due to a race condition that exists in some systems.

We recommend that you (a) verify the object is replicated before you post an action or (b) introduce a small delay to account for replication lag (e.g, 15-30 seconds).

Based on that, I think you need to add &scrape=true to the initial call in order to force an immediate scrape, then try to create the action a while later. (I believe the error message you're getting is probably because the page hasn't been cached/scraped yet.)
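
A rough sketch of that sequence, using the same RestClient calls as the question; the 15-second pause mirrors the delay the docs suggest, and in practice you would push this into a background job rather than sleeping inside a web request:

# Sketch only: force an immediate scrape, wait out the replication lag, then post the action
RestClient.post('https://graph.facebook.com', :id => page_url, :scrape => true)

sleep 15   # crude stand-in for "a while later"

RestClient.post('https://graph.facebook.com/me/my_namespace:do',
                :access_token => user.get_facebook_auth_token,
                :my_object    => page_url)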

我爱人 2024-12-20 05:52:02

From what I've seen, the Facebook Debugger page is the best (and, for most practical purposes, the only) way to force Facebook's cache of a given page's Open Graph information to refresh. Otherwise, you'll spend up to a week waiting for their cached information about pages they've already scraped to expire.

Basically, you should

  1. Rewrite your pages to act as you desire
  2. Pass the relevant URLs to the Debugger page (to see that they validate, and to refresh the caches), and then
  3. Allow the pages to be served up "normally" to see your changes in action.

There may be other ways to force the Facebook caches to expire; see this Stackoverflow page for some possible solutions. I haven't tried them yet, but they may be helpful.
