Algorithms: an explanation of complexity and optimization

I'm trying to find a source or two on the web that explain these in simple terms. Also, can this notion be used in a practical fashion to improve an algorithm? If so, how? Below is a brief explanation I got from a contact.

I don't know where you can find a simple explanation, but I'll try to explain it to you. Algorithmic complexity is a term that describes the dependence between the amount of input information and the resources needed to process it. For example, if you need to find the maximum in an array, you have to enumerate all the elements and compare each one with a variable (e.g. max). Suppose there are N elements in the array. The complexity of this algorithm is then O(N), because we enumerate every element once. If, for each element, we had to enumerate all the elements again (a nested loop), the complexity would be O(N*N). Binary search has complexity O(log2(N)), because it halves the search area at every step. You can also work out the space complexity of your algorithm: the dependence between the space required by the program and the amount of input data.
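
To make the two examples above concrete, here is a minimal Python sketch (the function names and the sample data are just illustrative): finding the maximum touches every element once, so it is O(N); binary search halves the remaining range at each step, so it is O(log N).

    def find_max(items):
        # Touch every element exactly once -> O(N) time, O(1) extra space.
        maximum = items[0]
        for value in items[1:]:
            if value > maximum:
                maximum = value
        return maximum

    def binary_search(sorted_items, target):
        # Halve the remaining search interval each iteration -> O(log N) time.
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # not found

    print(find_max([3, 1, 4, 1, 5, 9, 2]))        # 9
    print(binary_search([1, 2, 3, 5, 8, 13], 5))  # 3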

Comments (6)

云醉月微眠 2024-10-18 01:47:32

It's not easy to say everything there is about complexity, but I think the wiki has a good explanation of it and it's good for getting started; see:

  1. Big O notation, for an introduction to this aspect (you can also look at the theta and omega notations).
  2. Analysis of algorithms, to learn more about complexity.
  3. And computational complexity, which is a big theory in computer science.

As for optimization, you can look around the web and the wiki to find material on it, but your friend's five lines already give a good sample as an introduction. These topics are not a one-night effort to understand; there is a lot behind their usage, their calculation, and a thousand pieces of theory.

In short, you can get familiar with them as needed: read the wiki, and for more advanced reading try books such as Garey and Johnson or Computational Complexity: A Modern Approach, but don't expect to know everything about them after that. You can also have a look at these lecture notes: http://www.cs.rutgers.edu/~allender/lecture.notes/.
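
For reference, here is a sketch of the standard definitions behind those three notations (stated informally, using the usual asymptotic conventions):

    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0 \quad \text{(upper bound)}
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0 \quad \text{(lower bound)}
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n)) \quad \text{(tight bound)}

So O(N) says the running time grows at most linearly, Ω(N) says it grows at least linearly, and Θ(N) says both.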

酒中人 2024-10-18 01:47:32

As your friend hinted, this isn't a simple topic. But it is worth investing some time to learn. Check out this book, commonly used as a textbook in CS courses on algorithms.

哽咽笑 2024-10-18 01:47:32

The course reader used in Stanford's introductory programming classes has a great chapter on algorithmic analysis by legendary CS educator Eric Roberts. The whole text is online at this link, and Chapter 8 might be worth looking at.

娜些时光,永不杰束 2024-10-18 01:47:32

You can watch Structure and Interpretation of Computer Programs. It's a nice MIT course.

甜宝宝 2024-10-18 01:47:32

Also, can this notion be used in a practical fashion to improve an algorithm? If so, how?

It's not so much used for improving an algorithm as for evaluating the performance of algorithms and deciding which algorithm to choose. For any given problem, you really want to avoid algorithms that are O(N!) or O(N^x), since they slow down dramatically when the size of N (your input) increases. What you want is O(N), O(log(N)), or even better, O(1).

O(1) is constant time, which means the algorithm takes the same amount of time to execute for a million inputs as it does for one. O(N) is of course linear, which means the time it takes to execute the algorithm grows in proportion to the size of its input.
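
As a small illustration of the difference (a hypothetical Python sketch; the container choice is the point, not the specific numbers): checking membership in a list is O(N), while checking membership in a set is O(1) on average, so the second lookup stays fast no matter how large the data grows.

    import random

    data = list(range(1_000_000))
    as_set = set(data)             # same contents, different data structure
    needle = random.choice(data)

    # O(N): may scan the whole list before finding (or rejecting) the needle.
    found_in_list = needle in data

    # O(1) on average: a single hash lookup, independent of the set's size.
    found_in_set = needle in as_set

    assert found_in_list == found_in_set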

There are even some problems where any algorithm developed to solve them ends up being O(N!). Essentially, no fast algorithm is known that solves such a problem exactly (this class of problems is known as NP-complete). Once you realize you're dealing with such a problem, you can relax your requirements a bit and solve it imperfectly by "cheating". These cheats don't necessarily find the optimal solution; they settle for one that is good enough. My favorite cheats are genetic/evolutionary algorithms and rainbow tables.
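
As a toy illustration of that kind of "good enough" cheat (a hypothetical sketch, not code from the answer, and a plain hill climb rather than a full genetic algorithm): keep a candidate tour for the travelling salesman problem and accept random segment reversals whenever they shorten it. It never proves optimality; it just gets close enough, fast.

    import math
    import random

    def tour_length(points, order):
        # Total length of the closed tour that visits the points in the given order.
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def good_enough_tour(points, iterations=20_000):
        # Start from a random tour and keep random reversals that shorten it.
        order = list(range(len(points)))
        random.shuffle(order)
        best = tour_length(points, order)
        for _ in range(iterations):
            i, j = sorted(random.sample(range(len(points)), 2))
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            length = tour_length(points, candidate)
            if length < best:  # keep the mutation only if it improves the tour
                order, best = candidate, length
        return order, best

    cities = [(random.random(), random.random()) for _ in range(30)]
    route, length = good_enough_tour(cities)
    print(f"good-enough tour length: {length:.3f}")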

Another example of how understanding algorithmic complexity changes the way you think about programming is micro-optimization, or rather, not doing it. You often see newbies asking questions like "is ++x faster than x++?". Seasoned programmers mostly don't care and will usually reply that the first rule of optimization is: don't.

The more useful answer is that changing x++ to ++x does not in any way alter your algorithm's complexity. The complexity of your algorithm has a much greater impact on the speed of your code than any form of micro-optimization. For example, it is much more productive to look at your code and reduce the number of deeply nested for loops than to worry about how your compiler turns your code into assembly.
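
A hypothetical before/after of the "fewer nested loops" point: both functions below answer the same question (does the input contain a duplicate?), but replacing the inner loop with a set drops the complexity from O(N^2) to O(N).

    def has_duplicates_quadratic(items):
        # Nested loops compare every pair -> O(N^2).
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # One pass, remembering what has been seen -> O(N) on average.
        seen = set()
        for value in items:
            if value in seen:
                return True
            seen.add(value)
        return False

    print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
    print(has_duplicates_linear([1, 2, 3, 4]))     # False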

Yet another example is how, in games programming, speeding up code counter-intuitively involves adding even more code instead of reducing it. The added code takes the form of filters (basically if..else statements) that decide which bits of data need further processing and which can be discarded. From a micro-optimizer's point of view, adding code means more instructions for the CPU to execute. But in reality those filters reduce the problem space by discarding data, and so the whole thing runs faster overall.
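
A hypothetical sketch of that filtering idea: a cheap bounding-circle test throws away most pairs before the expensive narrow-phase check (only simulated here) ever runs, so the extra code makes the whole loop cheaper.

    def might_collide(a, b):
        # Cheap filter: squared centre distance vs. combined radii, no sqrt needed.
        dx, dy = a["x"] - b["x"], a["y"] - b["y"]
        reach = a["radius"] + b["radius"]
        return dx * dx + dy * dy <= reach * reach

    def precise_collision(a, b):
        # Stand-in for an expensive narrow-phase test (pixel/polygon level).
        return True

    def colliding_pairs(objects):
        pairs = []
        for i, a in enumerate(objects):
            for b in objects[i + 1:]:
                # Only the survivors of the cheap filter reach the expensive test.
                if might_collide(a, b) and precise_collision(a, b):
                    pairs.append((a["name"], b["name"]))
        return pairs

    objects = [
        {"name": "ship", "x": 0, "y": 0, "radius": 2},
        {"name": "rock", "x": 1, "y": 1, "radius": 2},
        {"name": "star", "x": 50, "y": 50, "radius": 1},
    ]
    print(colliding_pairs(objects))  # [('ship', 'rock')]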

千柳 2024-10-18 01:47:32

By all means, understand data structures, algorithms, and big-O.
Design code carefully and well, keeping it as simple as possible.

But that's not enough.

The key to optimizing is knowing how to find problems, after the code is written.

Here's an example.
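
The example that answer linked to is not included in the text above; as one common way of "finding problems after the code is written", a minimal sketch using Python's built-in cProfile might look like this (slow_part and fast_part are invented purely for illustration):

    import cProfile

    def slow_part():
        # Deliberately quadratic work so it dominates the profile.
        total = 0
        for i in range(2000):
            for j in range(2000):
                total += i * j
        return total

    def fast_part():
        return sum(range(2000))

    def main():
        slow_part()
        fast_part()

    # The report shows where the time actually goes: measure first,
    # then spend the optimization effort on the hot spot it reveals.
    cProfile.run("main()")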
