Is there a real O(n^n) algorithm?

Posted on 2024-11-09 20:33:51

Is there any real algorithm with a time complexity of O(n^n) that isn't just a gimmick?

I can create such an algorithm, like computing n^n in O(n^n) / Θ(n^n):

// Counts up to n^m by recursive repeated addition, performing Θ(n^m) additions in total.
long n_to_the_power_of_m(int n, int m) {
    if(m == 0) return 1;
    long sum = 0;
    for(int i = 0; i < n; ++i)
        sum += n_to_the_power_of_m(n, m-1);
    return sum;
}

(needs more than 4 minutes to compute 10^10)

Or the other way around: are there any problems which cannot be solved better than in O(n^n)?

Comments (6)

離殇 2024-11-16 20:33:58

Maybe not the most practical application, but still interesting: if you draw n^n dots (which of course is O(n^n)), with the following criteria:

  • Form a polygon with n vertices
  • Starting at each of these vertices, form another (smaller) polygon with n vertices
  • Repeat step 2 out to n iterations
  • Only draw dots on the nth iteration

you will end up with a really neat fractal pattern, dependent on the scale factor used to decrease polygon size each iteration.

If n = 6, this simple dot-drawing algorithm produces a pattern which draws many instances of the Koch snowflake fractal in the negative space between the dots (you can see this in the example Processing.js program)

Try experimenting with different values of n and scale factors (although you'll probably be limited to an upper bound of around n = 8 or 9 by browser performance constraints) and you can get some of the other fractals described here: https://en.wikipedia.org/wiki/N-flake#Hexaflake

(The one in the example is called a Sierpinski hexagon)

<!DOCTYPE html>

<html>
 <head>
    <title>Sierpinski Hexagon</title>
    <style>
        body {
            background-color: darkblue;
        }
        #canvasDiv {
            margin-left: 1%;
            margin-right: 1%;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="canvasDiv">
        <canvas id="_canvas"></canvas>
    </div>
 
<script src="https://cdn.jsdelivr.net/processing.js/1.4.8/processing.min.js"></script>
 
<script>
    var canvasWidth = 900;
    var canvasHeight = 800;
    
    var X = canvasWidth/2;
    var Y = canvasHeight/2;
    
    var N = 6;
    var R = 250;
    
    // For N = 6, Koch snowflake fractal pattern is most clearly visible with scaleDown = 0.42
    // 0.5 is just a plain hexagon
    // 0.45, 0.48, and 0.7 are interesting
    // 0.33 is very crisp, but many dots overlap
    var scaleDown = 0.42;
    
    
    var toRadians = function(degrees) {
        return degrees * Math.PI / 180;
    };
    
    var polygonVertices = function(p, n, centerX, centerY, r, clr) {
        p.strokeWeight(1);
        p.stroke(clr[0], clr[1], clr[2]);
        var theta = 360/n;
        for (var v = 0; v < n; v++) {
            p.point(centerX + r * Math.cos(toRadians(theta * v)), centerY + r * Math.sin(toRadians(theta * v)));
        }
    };
    
    // Recursive branching: n children per call, down to depth n; dots are drawn only at
    // the deepest level, so n^n points are plotted in total.
    var hyperfactorial = function(p, n, centerX, centerY, r, scaleDownFactor, depth) {
        if (depth == n) {
            polygonVertices(p, n, centerX, centerY, r, [Math.abs(X - centerX) * 500 / canvasWidth, Math.abs(Y - centerY) * 500 / canvasWidth, 255]);
            return;
        }
        else {
            var theta = 360/n;
            for (var i = 0; i < n; i++) {
                hyperfactorial(p, n, centerX + r * Math.cos(toRadians(theta * i)), centerY + r * Math.sin(toRadians(theta * i)), r*scaleDownFactor, scaleDownFactor, depth + 1);
            }
        }
    };
    
    var applyProcessing = function(p) {
        p.setup = function() {
            p.size(canvasWidth, canvasHeight);
            p.background(0, 0, 40);
            hyperfactorial(p, N, X, Y, R, scaleDown, 1);
        };
    };
    
    var canvas = document.getElementById("_canvas");
    var pInstance = new Processing(canvas, applyProcessing);
    
 </script>

</body>
</html>
一世旳自豪 2024-11-16 20:33:57

Consider the program that takes a description of a (terminating) Turing machine and returns the number of steps it takes to terminate. This is a relatively simple program to write -- it can simply emulate the Turing machine and count the steps.

The complexity of this program has no computable upper bound (and in particular grows faster than any computable function), so certainly grows faster than O(n^n).

The worst-case run-time on an input of size n is BB(n), the Busy Beaver sequence, which starts 0, 1, 4, 6, 13, is unknown after this (although lower bounds exist -- for example, the next two values are at least 47176870 and 7.412×10^36534 respectively), and is uncomputable for n large enough.
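
Here is a minimal sketch of that step-counting emulator, assuming a simple transition-table encoding of the machine (the Transition struct, countSteps, and the two-state example machine are illustrative, not a standard encoding):

#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Illustrative encoding: delta maps (state, symbol) -> (next state, symbol to write, head move).
struct Transition { int nextState; char write; int move; };  // move: -1 = left, +1 = right

// Emulate the machine until it enters haltState, counting the steps taken.
// Assumes the machine really does halt on this input; otherwise this loops forever.
long long countSteps(const std::map<std::pair<int, char>, Transition>& delta,
                     int startState, int haltState, const std::string& input) {
    std::map<long long, char> tape;                  // sparse tape, blank cells read as '0'
    for (long long i = 0; i < (long long)input.size(); ++i) tape[i] = input[i];
    long long head = 0, steps = 0;
    int state = startState;
    while (state != haltState) {
        char sym = tape.count(head) ? tape[head] : '0';
        const Transition& t = delta.at({state, sym});
        tape[head] = t.write;
        head += t.move;
        state = t.nextState;
        ++steps;
    }
    return steps;
}

int main() {
    // Hypothetical two-state machine: in state 0 on '0', write '1', move right, halt (state 1).
    std::map<std::pair<int, char>, Transition> delta = {
        {{0, '0'}, {1, '1', +1}},
    };
    std::printf("steps: %lld\n", countSteps(delta, 0, 1, "0"));  // prints "steps: 1"
    return 0;
}

The simulator itself is trivial; all of the (uncomputable) growth lives in how long a worst-case machine of a given size can run before halting.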

我恋#小黄人 2024-11-16 20:33:56

According to Wikipedia, there are some double exponential time problems, O(2^(2^poly(n))), which are more complex than O(n^n), e.g. "Decision procedures for Presburger arithmetic" (O(2^(2^(cn)))) and "Computing a Gröbner basis" (in the worst case O(2^(2^(n/10)))).

失与倦" 2024-11-16 20:33:56

There are many optimization problems that are essentially O(n!), e.g. in data compression. The common algorithms for this all need to cheat one way or another (many rely on heuristics), but they can't guarantee that they have found the perfect result this way. For example, choosing the optimal line filters during compression of a PNG image is such a problem, and one that is comparatively easy to understand.

Another example is algorithms to break encryption, which can potentially be even worse than O(n!).
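
As a rough illustration of why such exhaustive optimization explodes, here is a minimal sketch (not specific to PNG) that tries all n! orderings against a stand-in cost function; both the cost function and the choice of n are illustrative:

#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Stand-in cost function: in a real compressor this would be, e.g., the encoded
// size produced by a particular choice. Here it is an arbitrary toy formula.
long long cost(const std::vector<int>& order) {
    long long c = 0;
    for (std::size_t i = 0; i + 1 < order.size(); ++i)
        c += (long long)(order[i] ^ order[i + 1]);
    return c;
}

int main() {
    const int n = 8;                        // 8! = 40320 orderings; growth is factorial
    std::vector<int> perm(n);
    std::iota(perm.begin(), perm.end(), 0); // start from the sorted ordering 0..n-1

    long long best = -1;
    do {                                    // enumerate every one of the n! permutations
        long long c = cost(perm);
        if (best < 0 || c < best) best = c;
    } while (std::next_permutation(perm.begin(), perm.end()));

    std::printf("best cost over %d! orderings: %lld\n", n, best);
    return 0;
}

Even at n = 12 this is already about 479 million orderings, which is why practical compressors fall back on heuristics.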

没有伤那来痛 2024-11-16 20:33:55

There are computations (for instance, tetration) where the output size is O(n^n). It's kind of hard to compute them with time complexity less than O(n^n).
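
A simple way to see the output-size argument, using a hypothetical enumeration task rather than tetration itself: printing every function from an n-element set to itself produces n^n lines of output, so no algorithm can finish in less than Ω(n^n) time.

#include <cstdio>
#include <vector>

// Print every function f : {0,...,n-1} -> {0,...,n-1}, one per line.
// There are exactly n^n of them, so producing the output alone takes Omega(n^n) time.
void enumerateFunctions(int n) {
    std::vector<int> f(n, 0);                // f[i] is the image of i; start with the all-zero map
    while (true) {
        for (int i = 0; i < n; ++i) std::printf("%d ", f[i]);
        std::printf("\n");
        // Advance to the next function, like incrementing a base-n counter.
        int i = 0;
        while (i < n && ++f[i] == n) { f[i] = 0; ++i; }
        if (i == n) break;                   // counter wrapped: all n^n functions printed
    }
}

int main() {
    enumerateFunctions(3);                   // prints 3^3 = 27 lines
    return 0;
}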

濫情▎り 2024-11-16 20:33:54

What you have coded in your example is very similar to a depth-first search. So, that's one answer.

A depth-first search without any special characteristics (such as re-convergent paths that can be optimized out) should be O(n^n).

This is actually not a contrived example. Chess programs operate on the same algorithm: at each move there are n moves to consider (i.e. branches), and you search d moves deep, so that becomes O(n^d).
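
Here is a minimal sketch of that kind of brute-force game-tree search (move generation and evaluation are placeholders, not a real chess engine); with branching factor n and depth d = n it visits on the order of n^n leaf positions:

#include <cstdio>

// Placeholder leaf evaluation -- a real engine would score the position here.
int evaluate(long long position) {
    return (int)(position % 101) - 50;
}

// Placeholder move application -- a real engine would update an actual board state.
long long applyMove(long long position, int move) {
    return position * 31 + move + 1;
}

// Brute-force minimax: n candidate moves per position, searched d plies deep.
// Visits on the order of n^d leaves; with d = n that is n^n.
int search(long long position, int n, int d, bool maximizing) {
    if (d == 0) return evaluate(position);
    int best = maximizing ? -1000000 : 1000000;
    for (int move = 0; move < n; ++move) {
        int score = search(applyMove(position, move), n, d - 1, !maximizing);
        if (maximizing ? score > best : score < best) best = score;
    }
    return best;
}

int main() {
    const int n = 6;                                  // branching factor
    std::printf("%d\n", search(0, n, n, true));       // depth d = n, about n^n = 46656 leaves
    return 0;
}

Real engines prune this tree heavily (alpha-beta, transposition tables), but the underlying search is the O(n^d) one described above.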
