How to optimize Django's Paginator module

Posted 2024-10-11 01:00:40


I have a question about how Django's paginator module works and how to optimize it. I have a list of around 300 items from information that I get from different APIs on the internet. I am using Django's paginator module to display the list for my visitors, 10 items at a time. The pagination does not work as well as I want it to. It seems that the paginator has to get all 300 items before pulling out the ten that need to be displayed each time the page is changed. For example, if there are 30 pages, then going to page 2 requires my website to query the APIs again, put all the information in a list, and then access the ten that the visitor's browser requests. I do not want to keep querying the APIs for the same information that I already have on each page turn.

Right now, my view has a function that looks at the GET request and queries the APIs based on it. It then puts all that information into a list and passes it to the template file. As a result, this function runs every time someone turns the page, querying the APIs again.

How should I fix this?

Thank you for your help.


Comments (2)

呆头 2024-10-18 01:00:40


The paginator will in this case need the full list in order to do its job.

My advice would be to update a cache of the feeds at a regular interval, and then use that cache as the input to the paginator module. Doing an intensive or lengthy task on each and every request is always a bad idea. If not for the page load times the user will experience, think of the vulnerability of your server to attack.

You may want to check out Django's low-level cache API, which lets you store the feed result under a key in a globally accessible place; on each page request you can then retrieve the cached list and paginate it.

场罚期间 2024-10-18 01:00:40


ORMs do not load data until rows are actually selected:

query_results = Foo.objects.filter(id=1) # No SQL executed yet, just a lazy QuerySet.

foo = query_results[0] # now the query fires

or

for foo in query_results:
   foo.bar() # sql fires

If you are using a custom data source that is loading results on initialization then the pagination will not work as expected since all feeds will be fetched at once. You may want to subclass __getitem__ or __iter__ to do the actual fetch. It will then coincide with the way Django expects the results to be loaded.

Pagination needs to know how many results there are in order to do things like has_next(). In SQL it is usually inexpensive to get a count(*) on an indexed column, so you will also want a cheap way to know how many results there are (or just an estimate, if an exact count is too expensive).
