Return a paged collection of a user together with their followers
I'm still learning, so maybe my model is currently wrong, but this is what I have so far:
public class Account
{
    public string Id { get; set; }
    public string ArtistName { get; set; }
    public List<FollowerAccount> Followers { get; set; }
}

public class FollowerAccount
{
    public string AccountId { get; set; }
    public DateTimeOffset DateBeganFollowing { get; set; }
}
So my Account document contains a denormalized list of references to all of the accounts that are following it.
I now want to return the list of accounts from the 'accounts/1' Followers list, but page them. I know I can do this as 2 queries, but I was hoping I could nail it down to a single query.
Here is the index I have been playing around with; however, I can't get it to work.
public class TestIndex : AbstractMultiMapIndexCreationTask<TestIndex.ReduceResult>
{
    public class ReduceResult
    {
        public string AccountId { get; set; }
        public DateTimeOffset? DateBecameFollower { get; set; }
        public string ParentAccountId { get; set; }
        public string ArtistName { get; set; }
    }

    public TestIndex()
    {
        // One entry per follower relationship.
        AddMap<Account>(accounts => from account in accounts
                                    from follower in account.Followers
                                    select new
                                    {
                                        ParentAccountId = account.Id,
                                        AccountId = follower.AccountId,
                                        DateBecameFollower = (DateTimeOffset?)follower.DateBeganFollowing,
                                        ArtistName = (string)null
                                    });

        // One entry per account, carrying its ArtistName.
        AddMap<Account>(accounts => from account in accounts
                                    select new
                                    {
                                        ParentAccountId = (string)null,
                                        AccountId = account.Id,
                                        DateBecameFollower = (DateTimeOffset?)null,
                                        ArtistName = account.ArtistName
                                    });

        Reduce = results => from result in results
                            group result by result.AccountId into g
                            select new
                            {
                                ParentAccountId = g.Select(x => x.ParentAccountId).FirstOrDefault(x => x != null),
                                AccountId = g.Key,
                                DateBecameFollower = g.Select(x => x.DateBecameFollower).FirstOrDefault(x => x != null),
                                ArtistName = g.Select(x => x.ArtistName).FirstOrDefault(x => x != null)
                            };
    }
}
RavenDB can't page within the items of a single document, because it treats a document as an aggregate. So you can page over complete documents (using Skip/Take), but not over the Followers inside a single Account document.
So you'll have to resort to 2 database calls: one to load the Account, and another to query for the accounts that are following it.
However, if you use RavenDB's Lazy request feature, you can save on network round trips. See here and here for more info.
Another thing to bear in mind: how many followers do you expect a single account to have? If it's a large number, you might run into performance issues storing documents that large in RavenDB. The overhead of deserializing very large documents can become a problem.
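To illustrate, the two calls can be combined into one network round trip with the RavenDB .NET client's lazy operations. This is only a rough sketch: the index name `Followers_ByParentAccount`, the `FollowerEntry` result type, and the paging variables are hypothetical, not taken from the answer above.

```csharp
using (var session = store.OpenSession())
{
    // Register both operations lazily; nothing is sent to the server yet.
    var accountLazy = session.Advanced.Lazily.Load<Account>("accounts/1");

    var followersLazy = session.Query<FollowerEntry, Followers_ByParentAccount>()
        .Where(x => x.ParentAccountId == "accounts/1")
        .OrderBy(x => x.DateBeganFollowing)
        .Skip(page * pageSize)   // paging happens server-side, per query
        .Take(pageSize)
        .Lazily();

    // A single request executes all pending lazy operations.
    session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();

    var account = accountLazy.Value;
    var followersPage = followersLazy.Value.ToList();
}
```

Both `accountLazy.Value` and `followersLazy.Value` are materialized by the single `ExecuteAllPendingLazyOperations()` call, so the document load and the paged follower query cost one round trip instead of two.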