API front-end architecture
I want to build a web site that is a front-end for an API. Basically, the front-end takes some user input and uses that to query the API and retrieve results. When the results are retrieved, they are displayed. This is all done asynchronously.
My questions are:
- Do I use something like Celery to handle the API querying in the background?
- Assuming I use AJAX (which I will), do I have different URLs for handling user input and retrieval of the query results?
- Do I use something like long polling to get and display the results with JavaScript?
- Considering that the retrieved results can be further filtered, I'm considering using memcached as a storage. Is this appropriate?
If there's anything I've missed, or if there's a better approach, I'd be glad to hear about it.
Edit: I realized that I explained the requirements in a wrong way, so I'll try to reword.
Basically, my website is based on an API I have no control over. So, there's the 3rd-party API, my application with Django in the back-end and the front-end with JavaScript, CSS and HTML.
This is the reason why I introduced Celery into the mix. The flow of the application, in my mind, is like this. The user enters the required information on my web page and, when the user submits the data, that is sent to my back-end asynchronously. Now, Celery is used to send a request to the 3rd-party API and retrieve data. Meanwhile, my front-end keeps polling my back-end for the data and starts displaying it as it receives it.
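The flow described above (submit asynchronously, process in the background, poll for results) can be sketched framework-free with the standard library: a worker thread plays the role of the Celery worker, and an in-memory dict stands in for the store the front-end polls. `call_third_party_api` is a stub; all names here are illustrative assumptions, not part of the original post.

```python
# Framework-free sketch of the described flow: submit -> background worker
# (Celery's role) -> result store polled by the front-end.
import queue
import threading

results = {}           # task_id -> data; the store the front-end polls
tasks = queue.Queue()  # submitted jobs, consumed by the worker

def call_third_party_api(query):
    # Stub standing in for the real 3rd-party API call.
    return {"query": query, "items": [query.upper()]}

def worker():
    while True:
        task_id, query = tasks.get()
        results[task_id] = call_third_party_api(query)  # like caching the result
        tasks.task_done()

def submit(task_id, query):
    """Back-end view: accept user input and enqueue the background job."""
    tasks.put((task_id, query))

def poll(task_id):
    """Back-end view the front-end polls; None means 'still pending'."""
    return results.get(task_id)

threading.Thread(target=worker, daemon=True).start()
```

In the real app, `submit` and `poll` would be two Django views, the worker would be a Celery task, and the `results` dict would be a cache or database.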
1 Answer
You're missing some points here. By your requirements I assume that you want to build a frontend (html + javascript) that queries a (RESTful) API, which will simply be a server that "speaks" HTTP.
So, your API querying has to do with your server, that is, with Django on Apache, with Tornado, or similar. Celery cannot "handle" your queries, but it can be useful for background tasks.
On the client side, your AJAX calls will trigger server-side views that are mapped to URLs. How you define them is up to you. Have a look at some popular APIs (e.g. Twitter's) to see how they are structured.
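One way to map those views, sketched as a hypothetical Django URLconf: one endpoint accepts the user's input (and would kick off the background job), a second serves results for the front-end to poll. The view and route names are assumptions for illustration, not the asker's actual code.

```python
# urls.py -- hypothetical URLconf separating submission from result retrieval.
from django.urls import path
from . import views

urlpatterns = [
    # POST: user input arrives here; the view enqueues the background job.
    path("api/query/", views.submit_query, name="submit-query"),
    # GET: the front-end polls here until the task's results are ready.
    path("api/query/<str:task_id>/", views.query_results, name="query-results"),
]
```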
Long polling has to do with your "business" logic: it defines how data is delivered to the client once it becomes available.
Also, caching has to do with your server-side performance, and you are encouraged to use something like memcached or redis.
EDIT (for the edit): There is nothing wrong with your approach. Celery is the right tool to fetch data from an external API, save the results to a database, and of course use some caching; the client then polls for the results. However, there is a more optimal, non-blocking, elegant way of doing the same thing: use Tornado to fetch data from the external API and, when the data is ready, send it to the client. No Celery, no long polling. A great code snippet here.
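The non-blocking alternative can be sketched with stdlib `asyncio` rather than Tornado itself (the idea is the same: the handler awaits the external API and responds once the data is ready, with no worker queue and no polling loop). `fetch_external` is a stub; a real Tornado app would use its `AsyncHTTPClient`.

```python
# Non-blocking sketch: one coroutine per request, no Celery, no polling.
import asyncio

async def fetch_external(query):
    await asyncio.sleep(0)  # stands in for the network round-trip to the API
    return {"query": query, "items": [query]}

async def handle_request(query):
    # While this await is pending, the event loop serves other clients.
    data = await fetch_external(query)
    return {"status": 200, "body": data}

# asyncio.run(handle_request("django")) drives one request end to end.
```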