Efficient querying of extended details for an Association Object pattern
Details
- A Person has many Objectives. Objectives have Person-specific details about Activities.
- An Activity contains generic information, such as a world record.
- A Person can organize an Event to attempt the Objective.
- A Person invites other Persons to watch an Event with an Invitation.
Schema
Note: Only backrefs are listed on the example schema diagram, indicated by "(fk)". The arrows imply the normal relationships.
[Link to schema diagram image]
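For reference, here is a minimal SQLAlchemy model sketch inferred from the Details section, standing in for the diagram. Every class, table, and column name (and the back-references) is an assumption for illustration, not the asker's actual schema.

```python
# Hypothetical models matching the description; all names are assumptions.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Person(Base):
    __tablename__ = "people"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

class Activity(Base):
    __tablename__ = "activities"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    world_record = Column(String)  # generic information such as a world record

class Objective(Base):
    __tablename__ = "objectives"
    id = Column(Integer, primary_key=True)
    person_id = Column(ForeignKey("people.id"), nullable=False)
    activity_id = Column(ForeignKey("activities.id"), nullable=False)
    detail = Column(String)  # Person-specific detail about the Activity
    person = relationship("Person", backref="objectives")
    activity = relationship("Activity", backref="objectives")

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    organizer_id = Column(ForeignKey("people.id"), nullable=False)
    objective_id = Column(ForeignKey("objectives.id"), nullable=False)
    organizer = relationship("Person", backref="events_organized")
    objective = relationship("Objective", backref="events")

class Invitation(Base):
    __tablename__ = "event_invitations"
    id = Column(Integer, primary_key=True)
    event_id = Column(ForeignKey("events.id"), nullable=False)
    invitee_id = Column(ForeignKey("people.id"), nullable=False)
    status = Column(String, default="pending")  # e.g. pending / accepted / declined
    event = relationship("Event", backref="invitations")
    invitee = relationship("Person", backref="invitations_received")
```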
Question
I want most Event, Objective, and Activity details for all Invitations one Person received (regardless of status, but the status is still needed) displayed at once.
Is there a better way to represent the problem before I try tackling a JOIN like this? I believe Person -> Invitation <- Event is an Association Object pattern, but I am unsure how to get the Objective and Activity information in a clean, efficient manner for each Invitation returned.
Bonus: Provide a sample SQLAlchemy query.
Comments (1)
On the SQL side, this is pretty straightforward. I built some tables for testing using only one id number (for persons); all the rest of the keys were natural keys. Looking at the execution plan for this query (a sketch follows below) shows that the dbms (PostgreSQL) is using indexes throughout, except for a sequential scan on event_invitations. I'm sure that's because I used very little data, so all these tables easily fit into RAM. (When tables fit in RAM, it's often faster to scan a small table than to use the index.)
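The SQL the answer refers to is not reproduced here; as a rough illustration only, below is a hedged SQLAlchemy ORM sketch of the same chain of joins, written against the hypothetical surrogate-key models sketched earlier rather than the answerer's natural-key tables. The function name, connection string, and column names are assumptions.

```python
# Assumes the Person/Activity/Objective/Event/Invitation classes from the
# model sketch above; the connection string is a placeholder.
from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session

engine = create_engine("postgresql+psycopg2://user:password@localhost/demo")

def invitations_for(session, person_id):
    """Every Invitation one Person received, plus Event, Objective, and Activity details."""
    stmt = (
        select(Invitation.status, Event, Objective, Activity)
        .join(Invitation.event)      # event_invitations -> events
        .join(Event.objective)       # events -> objectives
        .join(Objective.activity)    # objectives -> activities
        .where(Invitation.invitee_id == person_id)
    )
    return session.execute(stmt).all()

with Session(engine) as session:
    for status, event, objective, activity in invitations_for(session, person_id=1):
        print(status, event.id, objective.detail, activity.world_record)
```

Each returned row carries the Invitation status together with the joined Event, Objective, and Activity details, so everything can be displayed at once, as the question asks.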
The optimizer estimates the cost for each part of the query to be 0.00, and you can't get much better than that. Actual run time was less than 0.2 milliseconds, but that doesn't mean much.
I'm sure you can translate this into SQLAlchemy. Let me know if you want me to post my tables and sample data.