
javascript - Followers - mongodb database design

So I'm using MongoDB, and I'm unsure whether I've got the correct/best database collection design for what I'm trying to do.

There can be many items, and a user can create new groups containing these items. Any user may follow any group!

[Schema diagram of the proposed collections]

I have deliberately not embedded the followers and items in the group collection, because a group could contain 5 items or 10,000 (and likewise for followers), and from my research I understand that unbounded arrays (where the limit is unknown) should be avoided because of the performance hit when a document has to be moved due to its growing size. (Is there a recommended maximum array length before hitting performance issues anyway?)
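
For reference, here is a minimal Mongoose sketch of the separate join collections this design implies (the field names match the query further down; the model and ref names are only illustrative):

// Minimal sketch: separate join collections instead of embedded arrays.
// Field names follow the query below; model/ref names are illustrative.
var mongoose = require('mongoose');

var followerSchema = new mongoose.Schema({
  user_id:  { type: mongoose.Schema.Types.ObjectId, ref: 'User',  required: true },
  group_id: { type: mongoose.Schema.Types.ObjectId, ref: 'Group', required: true }
});

var itemGroupSchema = new mongoose.Schema({
  item_id:  { type: mongoose.Schema.Types.ObjectId, ref: 'Item',  required: true },
  group_id: { type: mongoose.Schema.Types.ObjectId, ref: 'Group', required: true }
});

var Follower    = mongoose.model('Follower', followerSchema);
var item_groups = mongoose.model('ItemGroup', itemGroupSchema);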

I think that with the following design a real performance issue could arise when I want to get all of the groups that a user is following for a specific item (based on the user_id and item_id): I first have to find all of the groups the user is following, and then from those find all of the item_groups with the group_id $in that set and the given item_id. (But I can't actually see any other way of doing this.)

// 1) Find all groups that the user is following
Follower
  .find({ user_id: '54c93d61596b62c316134d2e' })
  .exec(function (err, following) {
    if (err) { throw err; }

    // Collect the ids of the followed groups
    var groupIds = [];
    for (var i = 0; i < following.length; i++) {
      groupIds.push(following[i].group_id);
    }

    // 2) Of those groups, find the item_groups that contain the given item
    item_groups
      .find({
        group_id: { $in: groupIds },
        item_id: '54ca9a2a6508ff7c9ecd7810'
      })
      .exec(function (err, itemGroups) {
        if (err) { throw err; }

        res.json(itemGroups);
      });
  });
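
As an aside, the same two-step lookup can be written a little more compactly with Mongoose's distinct(), which returns just the group ids and avoids the manual loop (same models and example ids as above):

// Same two-step pattern, but let MongoDB collect the distinct group ids
Follower
  .distinct('group_id', { user_id: '54c93d61596b62c316134d2e' })
  .exec(function (err, groupIds) {
    if (err) { throw err; }

    item_groups
      .find({
        group_id: { $in: groupIds },
        item_id: '54ca9a2a6508ff7c9ecd7810'
      })
      .exec(function (err, itemGroups) {
        if (err) { throw err; }
        res.json(itemGroups);
      });
  });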

Are there any better DB patterns for dealing with this type of setup?

UPDATE: Example use case added in comment below.

Any help/advice would be really appreciated.

Many Thanks, Mac


1 Reply


I agree with the general notion of other answers that this is a borderline relational problem.

MongoDB data modeling typically favors write-heavy designs, but that can be tricky for this use case, mostly because of the bookkeeping that would be required if you wanted to link users to items directly (a change to a group that is followed by lots of users would incur a huge number of writes, and you'd need some background worker to do this).

Let's investigate whether the read-heavy model is inapplicable here, or whether we're doing premature optimization.

The Read Heavy Approach

Your key concern is the following use case:

a real performance issue could be when I want to get all of the groups that a user is following for a specific item [...] because then I have to find all of the groups the user is following, and from that find all of the item_groups with the group_id $in and the item id.

Let's dissect this:

  • Get all groups that the user is following

    That's a simple query: db.followers.find({userId : userId}). We're going to need an index on userId, which will make the runtime of this operation O(log n), i.e. blazing fast even for large n.

  • from that find all of the item_groups with the group_id $in and the item id

    Now this is the trickier part. Let's assume for a moment that it's unlikely for items to be part of a large number of groups. Then a compound index { itemId, groupId } would work best, because we can reduce the candidate set dramatically through the first criterion: if an item is shared in only 800 groups and the user is following 220 groups, MongoDB only needs to find the intersection of these, which is comparatively easy because both sets are small (see the index sketch just after this list).
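
A minimal sketch of the indexes this relies on, in mongo shell syntax (the collection and field names follow the ones used above and may differ in your schema):

// Index for "get all groups the user is following"
db.followers.createIndex({ userId: 1 })

// Compound index for "item_groups by itemId, then groupId";
// itemId comes first because it is the more selective criterion here.
db.item_groups.createIndex({ itemId: 1, groupId: 1 })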

We'll need to go deeper than this, though:

The structure of your data is probably that of a complex network. Complex networks come in many flavors, but it makes sense to assume your follower graph is nearly scale-free, which is also pretty much the worst case. In a scale-free network, a very small number of nodes (celebrities, the Super Bowl, Wikipedia) attract a whole lot of 'attention' (i.e. have many connections), while a much larger number of nodes have trouble getting the same amount of attention combined.

The small nodes are no reason for concern: the queries above, including round-trips to the database, run in the 2ms range on my development machine on a dataset with tens of millions of connections and > 5GB of data. That data set isn't huge, but no matter what technology you choose, you will be RAM bound, because the indices must be in RAM in any case (data locality and separability in networks are generally poor), and the set intersection size is small by definition. In other words: this regime is dominated by hardware bottlenecks.

What about the supernodes though?

Since that would be guesswork and I'm very interested in network models, I took the liberty of implementing a dramatically simplified network tool based on your data model to make some measurements. (Sorry it's in C#, but generating well-structured networks is hard enough in the language I'm most fluent in...)

When querying the supernodes, I get results in the range of 7ms tops (that's on 12M entries in a 1.3GB database, with the largest group having 133,000 items in it and a user who follows 143 groups).

The assumption in this code is that the number of groups followed by a user isn't huge, but that seems reasonable here. If it's not, I'd go for the write-heavy approach.

Feel free to play with the code. Unfortunately, it would need a bit of optimization if you want to try it with more than a couple of GB of data, because it's simply not optimized and does some very inefficient calculations here and there (the beta-weighted random shuffle in particular could be improved).

In other words: I wouldn't worry about the performance of the read-heavy approach yet. The problem is often not so much that the number of users grows, but that users use the system in unexpected ways.

The Write Heavy Approach

The alternative approach is probably to reverse the order of linking:

UserItemLinker
{
 userId,
 itemId,
 groupIds[]  // for faster retrieval of the linker. It's unlikely that this grows large
}

This is probably the most scalable data model, but I wouldn't go for it unless we're talking about HUGE amounts of data where sharding is a key requirement. The key difference here is that we can now efficiently compartmentalize the data by using the userId as part of the shard key. That helps to parallelize queries, shard efficiently, and improve data locality in multi-datacenter scenarios.
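
For illustration, a minimal sketch of what sharding this collection on userId could look like in the mongo shell (the database name "mydb" and collection name "userItemLinker" are assumptions):

// Enable sharding for the (assumed) database and shard the linker collection.
// Leading with userId keeps one user's links together and lets per-user
// queries target a single shard; itemId is added for better key cardinality.
sh.enableSharding("mydb")
db.userItemLinker.createIndex({ userId: 1, itemId: 1 })
sh.shardCollection("mydb.userItemLinker", { userId: 1, itemId: 1 })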

This could be tested with a more elaborate version of the testbed, but I haven't found the time yet, and frankly, I think it's overkill for most applications.

