Left joining 2 tables onto 1 table
It must be pretty easy, but I can't think of any solution, nor can I find an answer anywhere...
I got the table 'users'
and one table 'blogs' (user_id, blogpost)
and one table 'messages' (user_id, message)
I'd like to have the following result:
User | count(blogs) | count(messages)
Jim | 0 | 3
Tom | 2 | 3
Tim | 0 | 1
Foo | 2 | 0
So what I did is:
SELECT u.id, count(b.id), count(m.id) FROM `users` u
LEFT JOIN blogs b ON b.user_id = u.id
LEFT JOIN messages m ON m.user_id = u.id
GROUP BY u.id
It obviously doesn't work, because the second left join relates to blogs not users. Any suggestions?
2 Answers
First, if you only want the count value, you could do subselects:
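The subselect approach the answer describes can be sketched like this, run against an in-memory SQLite database loaded with the question's sample data (the table and column names come from the question; the `name` column and the numeric ids are assumptions for illustration):

```python
import sqlite3

# Build the question's sample data in an in-memory SQLite database.
# The `name` column and the numeric ids are assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE blogs    (user_id INTEGER, blogpost TEXT);
    CREATE TABLE messages (user_id INTEGER, message TEXT);
    INSERT INTO users VALUES (1,'Jim'), (2,'Tom'), (3,'Tim'), (4,'Foo');
    INSERT INTO blogs VALUES (2,'b1'), (2,'b2'), (4,'b3'), (4,'b4');
    INSERT INTO messages VALUES (1,'m1'), (1,'m2'), (1,'m3'),
                                (2,'m4'), (2,'m5'), (2,'m6'), (3,'m7');
""")

# One correlated subselect per count: there is no join between blogs and
# messages, so the two counts cannot inflate each other.
rows = conn.execute("""
    SELECT u.name,
           (SELECT COUNT(*) FROM blogs b    WHERE b.user_id = u.id) AS blog_count,
           (SELECT COUNT(*) FROM messages m WHERE m.user_id = u.id) AS msg_count
    FROM users u
    ORDER BY u.id
""").fetchall()

print(rows)  # [('Jim', 0, 3), ('Tom', 2, 3), ('Tim', 0, 1), ('Foo', 2, 0)]
```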
Note that this is just a plain SQL example; I have no MySQL db here to test it right now.
On the other hand, you could do a join, but you should use an outer join to include users without blogs but with messages. That would imply that you get several users multiple times, so a group by would be helpful.
If you use an aggregate function in a select, SQL will collapse all your rows into a single row. In order to get more than one row out, you must use a group by clause. Then SQL will generate totals per user.
Fastest option
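The answer's actual "fastest option" query was lost in extraction. A common pattern that fits the description is to aggregate each child table once in a derived table and then left-join those small per-user results onto users, which avoids the row multiplication entirely. A sketch against SQLite with the question's sample data (user names and ids are assumptions):

```python
import sqlite3

# Sample data from the question; user names and ids are assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE blogs    (user_id INTEGER, blogpost TEXT);
    CREATE TABLE messages (user_id INTEGER, message TEXT);
    INSERT INTO users VALUES (1,'Jim'), (2,'Tom'), (3,'Tim'), (4,'Foo');
    INSERT INTO blogs VALUES (2,'b1'), (2,'b2'), (4,'b3'), (4,'b4');
    INSERT INTO messages VALUES (1,'m1'), (1,'m2'), (1,'m3'),
                                (2,'m4'), (2,'m5'), (2,'m6'), (3,'m7');
""")

# Aggregate each child table once, then left-join the per-user counts.
# COALESCE turns the NULL from a missing left-join match into 0.
rows = conn.execute("""
    SELECT u.name,
           COALESCE(b.cnt, 0) AS blog_count,
           COALESCE(m.cnt, 0) AS msg_count
    FROM users u
    LEFT JOIN (SELECT user_id, COUNT(*) AS cnt
               FROM blogs GROUP BY user_id) b ON b.user_id = u.id
    LEFT JOIN (SELECT user_id, COUNT(*) AS cnt
               FROM messages GROUP BY user_id) m ON m.user_id = u.id
    ORDER BY u.id
""").fetchall()

print(rows)  # [('Jim', 0, 3), ('Tom', 2, 3), ('Tim', 0, 1), ('Foo', 2, 0)]
```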
Why your code does not work
The count will be off, because you are counting duplicate items.
Simple fix, but will be slower than option 1
If you only count distinct id's, you will get the correct counts:
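The COUNT(DISTINCT ...) fix can be sketched as follows. It assumes blogs and messages each have their own id primary key (the "distinct id's" the answer refers to), which the question does not show. The script first runs the original query to demonstrate the inflated counts, then the fixed version:

```python
import sqlite3

# Sample data; the id primary keys on blogs/messages and the user names
# are assumptions needed to demonstrate COUNT(DISTINCT ...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE blogs    (id INTEGER PRIMARY KEY, user_id INTEGER, blogpost TEXT);
    CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER, message TEXT);
    INSERT INTO users VALUES (1,'Jim'), (2,'Tom'), (3,'Tim'), (4,'Foo');
    INSERT INTO blogs (user_id, blogpost) VALUES (2,'b1'), (2,'b2'), (4,'b3'), (4,'b4');
    INSERT INTO messages (user_id, message) VALUES (1,'m1'), (1,'m2'), (1,'m3'),
                                                   (2,'m4'), (2,'m5'), (2,'m6'), (3,'m7');
""")

# The original query: Tom's 2 blogs x 3 messages produce 6 joined rows,
# so both of his counts come back as 6.
broken = conn.execute("""
    SELECT u.name, COUNT(b.id), COUNT(m.id)
    FROM users u
    LEFT JOIN blogs b    ON b.user_id = u.id
    LEFT JOIN messages m ON m.user_id = u.id
    GROUP BY u.id, u.name
    ORDER BY u.id
""").fetchall()
print(broken)  # [('Jim', 0, 3), ('Tom', 6, 6), ('Tim', 0, 1), ('Foo', 2, 0)]

# Counting only distinct ids collapses the duplicated rows again.
fixed = conn.execute("""
    SELECT u.name, COUNT(DISTINCT b.id), COUNT(DISTINCT m.id)
    FROM users u
    LEFT JOIN blogs b    ON b.user_id = u.id
    LEFT JOIN messages m ON m.user_id = u.id
    GROUP BY u.id, u.name
    ORDER BY u.id
""").fetchall()
print(fixed)   # [('Jim', 0, 3), ('Tom', 2, 3), ('Tim', 0, 1), ('Foo', 2, 0)]
```

This keeps the OP's single-query shape, but note it still materializes the full cross product per user before deduplicating, which is why the answer warns it is slower than the pre-aggregated option.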