Need help with complex MySQL pagination query logic

Published 2024-09-25 07:12:59


Ok, at this point I am pretty confused about how to construct my pagination system in an efficient manner. The problem is that the system is not typical and does not go in regular intervals such as 10 per page: messages can have replies and thus share the same reply_chunk_id (reply_id). I do not want messages to get cut off, but avoiding that seems increasingly complex.

My initial query to retrieve messages is below. It retrieves messages as a grouping: no matter where a reply sits in the table, it comes out grouped with the other messages sharing its reply_chunk_id, one after another, in descending order by timestamp:

SELECT m.timestamp, m.user, m.message, g.grp_timestamp, m.reply_chunk_id
  FROM shoutbox AS m JOIN
       (SELECT reply_chunk_id, MIN(timestamp) AS grp_timestamp
          FROM shoutbox
         GROUP BY reply_chunk_id) AS g ON m.reply_chunk_id = g.reply_chunk_id
 WHERE m.topic_id = ?
 ORDER BY g.grp_timestamp DESC, g.reply_chunk_id, m.timestamp DESC LIMIT ?, ?

I would need another query to retrieve the limit parameters for this query, since the pages do not go in regular intervals and thus need unique starting and ending points depending on which page is selected. I was thinking of picking a limit, say $limit = 10, and then extending to the nearest group boundary above it. For example, if you reached the 10th message and its group had 2 more replies, you would retrieve 12 messages.

The trouble I am having is with the logic of this limit-retrieving query. I would have to somehow start at the first message in the topic, count all its replies, go to the second, and keep counting replies until the rounded number is reached, then output it.

The real trouble comes when you want to change the page: how would you carry over the ending point of the previous page? And you might skip a page, going straight from page 1 to page 3. The answer is that you cannot, so you would have to start from the first message of the topic each time, count all replies, do the same for each message until you reach your rounded number, somehow note that you have passed the first page, and continue until you reach the messages for the page you want. I am really not sure how to do this, or whether this is the best approach, so any help or advice whatsoever is really appreciated.
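The counting pass described above can be sketched in a few lines of script logic. This is a minimal illustration (the function name, the shape of the input, and the fetch step are assumptions, not from the question): walk the groups in display order, accumulate per-group message counts, and start a new page whenever the current one has reached the preferred size.

```python
# Sketch: compute page boundaries from per-group message counts.
# `groups` is a list of (reply_chunk_id, message_count) tuples, already
# ordered by grp_timestamp DESC -- i.e. the display order of the topic.

def page_boundaries(groups, per_page=10):
    """Return a list of pages; each page is a list of reply_chunk_ids.

    A group is never split: once a page holds >= per_page messages,
    the next group starts a new page (the "nearest rounded number"
    rule from the question, e.g. 10 messages + 2 replies -> 12).
    """
    pages = [[]]
    count = 0
    for chunk_id, n in groups:
        if count >= per_page:       # page is full at a group border
            pages.append([])
            count = 0
        pages[-1].append(chunk_id)
        count += n
    return pages

# Example: group "a" has 8 messages, "b" has 4, "c" has 3.
# Page 1 = a (8) + b (4) = 12 messages; page 2 = c.
print(page_boundaries([("a", 8), ("b", 4), ("c", 3)], per_page=10))
# -> [['a', 'b'], ['c']]
```

With the boundaries precomputed like this, jumping from page 1 straight to page 3 is just an index lookup, which is the part that a pure LIMIT-based approach cannot express.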

TABLE DESIGN

CREATE TABLE IF NOT EXISTS `shoutbox` (
  `id` int(5) NOT NULL AUTO_INCREMENT,
  `timestamp` int(11) NOT NULL,
  `user` varchar(25) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL 
         DEFAULT 'anonymous',
  `message` varchar(2000) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  `topic_id` varchar(35) NOT NULL,
  `reply_chunk_id` varchar(35) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

Comments (1)

山田美奈子 2024-10-02 07:12:59


Edit: got some time to test; here is a new, verified single-query solution:

SELECT timestamp, user, message, grp_timestamp, reply_chunk_id
FROM (
  SELECT totalresult.*,
    @page := IF(
        (
          @subcounter >= @perpage                    -- if we have more than @perpage
          AND
          reply_chunk_id != @old_chunk_id            -- AND we are on a reply_chunk border
        )
        OR
        (
          @subcounter >= @absolutemax                -- the upper maximum is reached
        )
        OR 
        (
          (@subcounter + grp_messagecount > @absolutemax) -- next replychunk would put us over the top
          AND 
          (grp_messagecount <= @absolutemax)              -- next replychunk would fit on a single page
          AND
          (@subcounter >= @allowprematurebreak)           -- and we have enough items to qualify for a page
        ),
      @page + 1 + (@subcounter:=0),                 -- increment page and reset counter
      @page) as page,                               -- otherwise keep the same @page
    @subcounter := @subcounter + 1      as counter, -- increment counter
    @old_chunk_id := reply_chunk_id as reply_chunk  -- store previous reply chunk
  FROM (
    SELECT 
        m.timestamp, m.user, m.message, g.grp_timestamp, m.reply_chunk_id, g.grp_messagecount
    FROM shoutbox AS m
    JOIN (
          SELECT
            reply_chunk_id,
            MIN(timestamp) AS grp_timestamp,
            COUNT(*)       AS grp_messagecount
          FROM shoutbox
          GROUP BY reply_chunk_id
    ) AS g ON m.reply_chunk_id = g.reply_chunk_id
    WHERE m.topic_id = ? 
    ORDER BY g.grp_timestamp DESC, g.reply_chunk_id, m.timestamp DESC
  ) AS totalresult
  JOIN (
    SELECT
      @page                :=0,  -- the page number / counter
      @old_chunk_id        :=0,  -- placeholder for old reply_chunk so we can detect boundaries
      @subcounter          :=0,  -- counter for number of actual messages
      @perpage             :=10, -- preferred amount of messages per page
      @absolutemax         :=20, -- allow breaking in reply_chunk if absolutemax is reached
      @allowprematurebreak :=5   -- minimum number of messages per page, used when the
                                 -- next chunk can be shown whole on the next page
  ) AS void
) AS paginatedresult
WHERE page = <pagenumber>

I've added some settings as variables, for easy and somewhat more readable binding of parameters. About performance: benchmark it; chances are it does fine (certainly since it is restricted by topic). If it doesn't, your solution would be to get the output of this inner subquery:

          SELECT
            reply_chunk_id,
            MIN(timestamp) AS grp_timestamp,
            COUNT(*)       AS grp_messagecount
          FROM shoutbox
          GROUP BY reply_chunk_id

And move the logic of determining where to break into some script logic which decides which reply_chunk_ids to query for.
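That script-side fallback could look roughly like this (a Python sketch; the function name, input shape, and the second-query construction are assumptions, not part of the answer). It consumes the grouped subquery's rows, applies the same @perpage/@absolutemax break rules, and returns the reply_chunk_ids for one page, which a second query can then fetch with an IN (...) clause:

```python
# Sketch of the script-side alternative: use the grouped subquery's
# output (reply_chunk_id, grp_timestamp, grp_messagecount) to decide
# which reply_chunk_ids belong on a page, then query only those chunks.

def chunks_for_page(grouped_rows, page, per_page=10, absolute_max=20):
    """grouped_rows: (reply_chunk_id, grp_timestamp, grp_messagecount)
    tuples ordered by grp_timestamp DESC. Returns the reply_chunk_ids
    that land on `page` (1-based). A chunk is never split; a break is
    also forced early if adding the next chunk would exceed absolute_max.
    """
    pages, count = [[]], 0
    for chunk_id, _ts, n in grouped_rows:
        if count >= per_page or (count and count + n > absolute_max):
            pages.append([])
            count = 0
        pages[-1].append(chunk_id)
        count += n
    return pages[page - 1] if page <= len(pages) else []

# A second query would then fetch just those chunks, e.g.:
# ids = chunks_for_page(rows, page)
# placeholders = ", ".join(["?"] * len(ids))
# sql = ("SELECT timestamp, user, message, reply_chunk_id FROM shoutbox "
#        f"WHERE topic_id = ? AND reply_chunk_id IN ({placeholders}) "
#        "ORDER BY timestamp DESC")
```

The trade-off versus the single-query version: two round trips instead of one, but no reliance on user-variable evaluation order, and the grouped counts can be cached per topic between page views.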
