Time complexity of enumerating subgraphs on n vertices
I have an algorithm for creating a list of all possible connected subgraphs on p vertices through a given vertex. It's not perfect, but I think it should work correctly. The problem is that I get lost when I try to calculate its time complexity.

I conjured up something like T(p) = 2^d + 2^d * n * T(p-1), where d = Δ(G) is the maximum degree, p is the number of vertices required, and n = |V|. It's really just a guess. Can anyone help me with this?

The powerSet() algorithm used should be O(2^d) or O(d * 2^d).
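For reference, powerSet() isn't included below; an iterative construction with that cost might look like this (a sketch, not my exact implementation):

private Set<Set<Node>> powerSet(Set<Node> input) {
    // Start from the empty set and double the collection once per element.
    Set<Set<Node>> subsets = new HashSet<Set<Node>>();
    subsets.add(new HashSet<Node>());
    for (Node node : input) {
        Set<Set<Node>> extended = new HashSet<Set<Node>>();
        for (Set<Node> subset : subsets) {
            // Copy each existing subset and add the current node to the copy.
            Set<Node> withNode = new HashSet<Node>(subset);
            withNode.add(node);
            extended.add(withNode);
        }
        subsets.addAll(extended);
    }
    // For |input| = d this enumerates all 2^d subsets;
    // the per-subset copying makes it O(d * 2^d) overall.
    return subsets;
}

The main recursion itself: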
private void connectedGraphsOnNVertices(int n, Set<Node> connectedSoFar, Set<Node> neighbours, List<Set<Node>> graphList) {
    if (n == 1) return;
    for (Set<Node> combination : powerSet(neighbours)) {
        if (connectedSoFar.size() + combination.size() > n || combination.size() == 0) {
            // Overshoots the target size, or nothing to add: skip.
            continue;
        } else if (connectedSoFar.size() + combination.size() == n) {
            // Target size reached: record this subgraph.
            Set<Node> newGraph = new HashSet<Node>();
            newGraph.addAll(connectedSoFar);
            newGraph.addAll(combination);
            graphList.add(newGraph);
            continue;
        }
        // Still too small: absorb the combination, then recurse
        // on each newly added node's neighbourhood.
        connectedSoFar.addAll(combination);
        for (Node node : combination) {
            Set<Node> k = new HashSet<Node>(node.getNeighbours());
            connectedGraphsOnNVertices(n, connectedSoFar, k, graphList);
        }
        connectedSoFar.removeAll(combination);
    }
}
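I call it through a given start vertex roughly like this (illustrative driver code; names like start are placeholders):

// Enumerate all connected subgraphs of size n that contain 'start'.
List<Set<Node>> graphList = new ArrayList<Set<Node>>();
Set<Node> connectedSoFar = new HashSet<Node>();
connectedSoFar.add(start);  // the given vertex is always included
Set<Node> neighbours = new HashSet<Node>(start.getNeighbours());
connectedGraphsOnNVertices(n, connectedSoFar, neighbours, graphList);
// graphList now holds one vertex set per subgraph found.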
1 Answer
It looks like the algorithm has a bug: in the recursive calls, nodes that appear in combination may already be present in connectedSoFar, so the check that connectedSoFar.size() + combination.size() equals n seems incorrect, as it can count a node twice.
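One minimal guard against that double-counting would be to remove already-collected nodes from the neighbour set before recursing, for example (a sketch, not your original code):

// Keep only genuinely new candidates, so combination and connectedSoFar
// stay disjoint and their sizes add up correctly.
Set<Node> k = new HashSet<Node>(node.getNeighbours());
k.removeAll(connectedSoFar);
connectedGraphsOnNVertices(n, connectedSoFar, k, graphList);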
Anyway, to analyze the algorithm as written: the power set contains 2^d elements. Every operation in the "else" branch takes O(n) time, because connectedSoFar and combination together can't contain more than n nodes. Adding the elements of combination to connectedSoFar then takes O(n log n) time, because |combination| ≤ n. The iteration over the nodes of combination happens O(n) times; within it there is an O(d) operation to construct the hash set k, and then the recursive call.
Now denote the complexity of the procedure by X(n), where n is the parameter. You have

X(n) ~ 2^d (n + n log n + n (d + X(n - 1)))

because the recursive call has added at least one vertex to the graph, so in effect the parameter n decreases by at least one with each level of recursion.
Simplify this to

X(n) ~ 2^d n (1 + d + log n + X(n - 1)).

Because d is a constant, write D = 2^d, eliminate the constant 1, and you get

X(n) ~ D n (d + log n + X(n - 1)),

which you can analyze as

X(n) ~ (2^d)^n n! (d + log n),
showing that your algorithm is really a time hog :)
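For completeness, the closed form comes from unrolling the recurrence; sketching the expansion with D = 2^d as above:

\[
X(n) \sim D\,n\,(d + \log n) + D\,n\,X(n-1)
     \sim \sum_{k=1}^{n-1} D^{k}\,\frac{n!}{(n-k)!}\,\bigl(d + \log(n-k+1)\bigr).
\]

Each unrolling step multiplies the coefficient by at most D n, and there are at most n levels, so the deepest terms dominate and the whole sum is bounded by roughly D^n n! (d + log n) = (2^d)^n n! (d + log n).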