Can't retrieve search results from the server side: Facebook Graph API with Python
I'm doing some simple Python + FB Graph training on my own, and I've run into a weird problem:
import time
import sys
import urllib2
import urllib
from json import loads

base_url = "https://graph.facebook.com/search?q="
post_id = None
post_type = None
user_id = None
message = None
created_time = None

def doit(hour):
    page = 1
    search_term = "\"Plastic Planet\""
    encoded_search_term = urllib.quote(search_term)
    print encoded_search_term
    type = "&type=post"
    url = "%s%s%s" % (base_url, encoded_search_term, type)
    print url

    while(1):
        try:
            response = urllib2.urlopen(url)
        except urllib2.HTTPError, e:
            print e
        finally:
            pass

        content = response.read()
        content = loads(content)

        print "=================================="
        for c in content["data"]:
            print c
        print "****************************************"

        try:
            content["paging"]
            print "current URL"
            print url
            print "next page!------------"
            url = content["paging"]["next"]
            print url
        except:
            pass
        finally:
            pass

        """
        print "new URL is ======================="
        print url
        print "=================================="
        """
        print url
What I'm trying to do here is to page through the search results automatically by following content["paging"]["next"]. But the weird thing is that no data is returned; I received the following:

{"data":[]}

even in the very first loop. Yet when I copied the URL into a browser, a lot of results were returned. I've also tried a version with my access token, and the same thing happens.
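In other words, the paging loop I'm after boils down to something like this (a sketch, assuming each response carries a "paging"/"next" URL until the last page, and that the last page comes back with an empty "data" list):

import urllib2
from json import loads

url = "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post"
while True:
    content = loads(urllib2.urlopen(url).read())
    if not content["data"]:
        break                    # an empty page means we're done
    for post in content["data"]:
        print post
    if "paging" not in content or "next" not in content["paging"]:
        break                    # no further page to fetch
    url = content["paging"]["next"]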
+++++++++++++++++++EDITED and SIMPLIFIED++++++++++++++++++
OK, thanks to TryPyPy, here's the simplified and edited version of my previous question:
Why does this:

import urllib2

url = "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000"
response = urllib2.urlopen(url)
print response.read()

result in {"data":[]}? But the same URL produces a lot of data in a browser?
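(One way to chase a browser-vs-urllib2 discrepancy like this is to look at which headers urllib2 actually sends, since they differ from a browser's; a sketch using the standard Request.header_items() call:)

import urllib2

url = "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post"
request = urllib2.Request(url)
urllib2.urlopen(request)
# After urlopen, the request object records everything that was sent,
# e.g. the default 'User-agent: Python-urllib/2.x' header.
print request.header_items()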
Answer:
Trial and error using Chrome (where I got lots of data) and Firefox (where I got the empty response) made me zero in on the 'Accept-Language' header. Other modifications are supposedly only cosmetic, but I'm not sure about the CookieJar.
Here's a cleaned up, minimal working version:
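A minimal sketch along those lines, assuming the only change that matters is sending an 'Accept-Language' header via urllib2.Request (the exact header value is an assumption modelled on a typical browser request):

import urllib2
from json import loads

url = "https://graph.facebook.com/search?q=%22Plastic+Planet%22&type=post&limit=25"

# The 'Accept-Language' header is the fix identified above; its value
# here is an assumption taken from a typical browser request.
headers = {'Accept-Language': 'en-US,en;q=0.5'}
request = urllib2.Request(url, None, headers)
response = urllib2.urlopen(request)

content = loads(response.read())
for post in content["data"]:
    print post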