Using Throwable for situations other than errors
I have always seen Throwable/Exception in the context of errors. But I can think of situations where it would be really nice to extend a Throwable just to break out of a stack of recursive method calls. Say, for example, you were trying to find and return some object in a tree by way of a recursive search. Once you find it, stick it in some Carrier extends Throwable and throw it, and catch it in the method that calls the recursive method.
Positive: You don't have to worry about the return logic of the recursive calls; since you found what you needed, why worry about how to carry that reference back up the method stack?

Negative: You have a stack trace that you don't need, and the try/catch block becomes counter-intuitive.
Here is an idiotically simple usage:
public class ThrowableDriver {
    public static void main(String[] args) {
        ThrowableTester tt = new ThrowableTester();
        try {
            tt.rec();
        } catch (TestThrowable e) {
            System.out.print("All good\n");
        }
    }
}

public class TestThrowable extends Throwable {
}

public class ThrowableTester {
    int i = 0;

    void rec() throws TestThrowable {
        if (i == 10) throw new TestThrowable();
        i++;
        rec();
    }
}
The question is, is there a better way to attain the same thing? Also, is there something inherently bad about doing things this way?
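To make the "Carrier" idea concrete, here is a hedged sketch of what carrying the found reference back via a Throwable might look like. All names here (Carrier, Node, TreeSearch) are invented for illustration; they do not appear in the question's code.

```java
// Sketch of the "Carrier extends Throwable" idea: the found node rides
// back up the call stack inside a Throwable instead of a return value.
class Node {
    final int value;
    final Node left, right;
    Node(int value, Node left, Node right) {
        this.value = value; this.left = left; this.right = right;
    }
}

class Carrier extends Throwable {
    final Node found;
    Carrier(Node found) { this.found = found; }
}

public class TreeSearch {
    // Recursive search; on a hit, abandon the whole stack by throwing.
    static void search(Node n, int target) throws Carrier {
        if (n == null) return;
        if (n.value == target) throw new Carrier(n);
        search(n.left, target);
        search(n.right, target);
    }

    // The caller turns the throw back into an ordinary return value.
    static Node find(Node root, int target) {
        try {
            search(root, target);
            return null;             // exhausted the tree: not found
        } catch (Carrier c) {
            return c.found;          // the reference came back via the throw
        }
    }

    public static void main(String[] args) {
        Node root = new Node(1, new Node(2, null, null), new Node(3, null, null));
        System.out.println(find(root, 3).value);   // prints 3
        System.out.println(find(root, 5));         // prints null
    }
}
```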
Actually, it's an excellent idea to use exceptions in some cases where "normal" programmers wouldn't think of using them. For instance, in a parser that starts down a "rule" and discovers that it doesn't work, an exception is a pretty good way to blow back to the correct recovery point. (This is, to a degree, similar to your suggestion of breaking out of recursion.)
There is the classical objection that "exceptions are no better than a goto", which is patently false. In Java and most other reasonably modern languages you can have nested exception handlers and finally handlers, and so when control is transferred via an exception a well-designed program can perform cleanup, etc. In fact, in this way exceptions are in several ways preferable to return codes, since with a return code you must add logic at EVERY return point to test the return code and find and execute the correct finally logic (perhaps several nested pieces) before exiting the routine. With exception handlers this is reasonably automatic, via nested exception handlers.

Exceptions do come with some "baggage" -- the stack trace in Java, eg. But Java exceptions are actually quite efficient (at least compared to implementations in some other languages), so performance shouldn't be a big issue if you're not using exceptions too heavily.
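The parser idea above can be sketched as follows. The grammar and all names here are invented for illustration: a rule is tried, and if it fails partway in, an exception blows control back to the recovery point, where the parser rewinds and tries the next alternative.

```java
// Exception used to abandon a failed grammar rule and return
// control to the recovery point (the alternatives sequence).
class ParseFail extends Exception {}

public class TinyParser {
    private final String input;
    private int pos;
    TinyParser(String input) { this.input = input; }

    private void expect(char c) throws ParseFail {
        if (pos >= input.length() || input.charAt(pos) != c) throw new ParseFail();
        pos++;
    }

    // rule := "ab" | "ac"
    boolean rule() {
        int mark = pos;
        try {
            expect('a'); expect('b'); return true;
        } catch (ParseFail e) {
            pos = mark;              // recovery point: rewind the input
        }
        try {
            expect('a'); expect('c'); return true;
        } catch (ParseFail e) {
            pos = mark; return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(new TinyParser("ac").rule());   // prints true
        System.out.println(new TinyParser("xy").rule());   // prints false
    }
}
```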
[I'll add that I have 40 years of programming experience, and I've been using exceptions since the late 70s. Independently "invented" try/catch/finally (called it BEGIN/ABEXIT/EXIT) ca 1980.]
An "illegal" digression:
I think the thing that is often missed in these discussions is that the #1 problem in computing is not cost or complexity or standards or performance, but control.
By "control" I don't mean "control flow" or "control language" or "operator control" or any of the other contexts where the term "control" is frequently used. I do sort of mean "control of complexity", but it's more than that -- it's "conceptual control".
We've all done it (at least those of us that have been programming for longer than about 6 weeks) -- started out writing a "simple little program" with no real structure or standards (other than those we might habitually use), not worrying about its complexity, because it's "simple" and a "throwaway". But then, in maybe one case in 10 or one case in 100, depending on the context, the "simple little program" grows into a monstrosity.
We lose "conceptual control" over it. Fixing one bug introduces two more. The control and data flow of the program becomes opaque. It behaves in ways that we can't quite comprehend.
And yet, by most standards, this "simple little program" is not that complex. It's not really that many lines of code. Very likely (since we are skilled programmers) it's broken into an "appropriate" number of subroutines. Run it through a complexity measuring algorithm and likely (since it is still relatively small and "subroutine-ized") it will score as not particularly complex.
Ultimately, maintaining conceptual control is the driving force behind virtually all software tools and languages. Yes, things like assemblers and compilers make us more productive, and productivity is the claimed driving force, but much of that productivity improvement is because we don't have to busy ourselves with "irrelevant" details and can focus instead on the concepts we want to implement.
Major advancements in conceptual control occurred early in computing history as "external subroutines" came into existence and became more and more independent of their environments, allowing a "separation of concerns" where a subroutine developer did not need to know much about the subroutine's environment, and the user of the subroutine did not need to know much about the subroutine internals.
The simple development of BEGIN/END and "{...}" produced similar advancements, as even "inline" code could benefit from some isolation between "out there" and "in here".
Many of the tools and language features that we take for granted exist and are useful because they help maintain intellectual control over ever more complex software structures. And one can pretty accurately gauge the utility of a new tool or feature by how it aids in this intellectual control.
One of the biggest remaining areas of difficulty is resource management. By "resource" here, I mean any entity -- object, open file, allocated heap, etc -- that might be "created" or "allocated" in the course of program execution and subsequently need some form of deallocation. The invention of the "automatic stack" was a first step here -- variables could be allocated "on the stack" and then automatically deleted when the subroutine that "allocated" them exited. (This was a very controversial concept at one time, and many "authorities" advised against using the feature because it impacted performance.)
But in most (all?) languages this problem still exists in one form or another. Languages that use an explicit heap have the need to "delete" whatever you "new", eg. Opened files must be closed somehow. Locks must be released. Some of these problems can be finessed (using a GC heap, eg) or papered over (reference counts or "parenting"), but there's no way to eliminate or hide all of them. And, while managing this problem in the simple case is fairly straight-forward (eg, "new" an object, call the subroutine that uses it, then "delete" it), real life is rarely that simple. It's not uncommon to have a method that makes a dozen or so different calls, somewhat randomly allocating resources between the calls, with different "lifetimes" for those resources. And some of the calls may return results that change the control flow, in some cases causing the subroutine to exit, or they may cause a loop around some subset of the subroutine body. Knowing how to release resources in such a scenario (releasing all the right ones and none of the wrong ones) is a challenge, and it gets even more complex as the subroutine is modified over time (as all code of any complexity is).

The basic concept of a try/finally mechanism (ignoring for a moment the catch aspect) addresses the above problem fairly well (though far from perfectly, I'll admit). With each new resource or group of resources that needs to be managed, the programmer introduces a try/finally block, placing the deallocation logic in the finally clause. In addition to the practical aspect of assuring that the resources will be released, this approach has the advantage of clearly delineating the "scope" of the resources involved, providing a sort of documentation that is "forcefully maintained".

The fact that this mechanism is coupled with the catch mechanism is a bit of serendipity, as the same mechanism that is used to manage resources in the normal case is used to manage them in the "exception" case. Since "exceptions" are (ostensibly) rare, it is always wise to minimize the amount of logic in that rare path, since it will never be as well tested as the mainline, and since "conceptualizing" error cases is particularly difficult for the average programmer.

Granted, try/finally has some problems. One of the first among them is that the blocks can become nested so deeply that the program structure becomes obscured rather than clarified. But this is a problem in common with do loops and if statements, and it awaits some inspired insight from a language designer. The bigger problem is that try/finally has the catch (and even worse, exception) baggage, meaning that it is inevitably relegated to be a second-class citizen. (Eg, finally doesn't even exist as a concept in Java bytecodes, beyond the now-deprecated JSR/RET mechanism.)

There are other approaches. IBM iSeries (or "System i" or "IBM i" or whatever they call it now) has the concept of attaching a cleanup handler to a given invocation level in the call stack, to be executed when the associated program returns (or exits abnormally). While this, in its current form, is clumsy and not really suited to the fine level of control needed in a Java program, eg, it does point at a potential direction.

And, of course, in the C++ language family (but not Java) there is the ability to instantiate a class representative of the resource as an automatic variable and have the object destructor provide "cleanup" on exit from the variable's scope. (Note that this scheme, under the covers, is essentially using try/finally.) This is an excellent approach in many ways, but it requires either a suite of generic "cleanup" classes or the definition of a new class for each different type of resource, creating a potential "cloud" of textually bulky but relatively meaningless class definitions. (And, as I said, it's not an option for Java in its present form.)
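The try/finally resource pattern described above, as a minimal Java sketch (the resources and log here are invented stand-ins): each resource group gets its own try/finally, and the releases run no matter how control leaves the region -- normal completion or a throw.

```java
// Nested try/finally blocks: each finally clause delineates the
// "scope" of one resource and runs on both the normal and the
// exceptional exit path.
import java.util.ArrayList;
import java.util.List;

public class CleanupDemo {
    static final List<String> log = new ArrayList<>();

    static void work(boolean fail) {
        log.add("acquire A");
        try {
            log.add("acquire B");
            try {
                if (fail) throw new RuntimeException("boom");
                log.add("use A and B");
            } finally {
                log.add("release B");    // B's scope is clearly delineated
            }
        } finally {
            log.add("release A");        // A's scope encloses B's
        }
    }

    public static void main(String[] args) {
        try {
            work(true);
        } catch (RuntimeException e) {
            // control arrives here only after both finally clauses ran
        }
        System.out.println(log);  // [acquire A, acquire B, release B, release A]
    }
}
```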
But I digress.
Using exceptions for program control flow is not a good idea.
Reserve exceptions for exactly that, for circumstances that are outside of the normal operating criteria.
There are quite a few related questions on SO:
Example of “using exceptions to control flow”
How slow are Java exceptions?
Why not use exceptions as regular flow of control?
The syntax becomes wonky because they're not designed for general control flow. Standard practice in recursive function design is to return either a sentinel value or the found value (or nothing, which would work in your example) all the way back up.
Conventional wisdom: "Exceptions are for exceptional circumstances." As you note, Throwable sounds in theory more generalized, but except for Exceptions and Errors, it doesn't seem designed for broader use. From the docs:

Many runtimes (VMs) are designed not to optimize around throwing exceptions, meaning they can be "expensive". That doesn't mean you couldn't do this, of course, and "expensive" is subjective, but generally this isn't done, and others would be surprised to find it in your code.
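The sentinel-or-found-value design this answer recommends might look like the following. Node is an invented stand-in for the question's tree type; the point is that the reference travels back up the recursion through ordinary returns.

```java
// Standard recursive-search shape: return the found value, with null
// as the "not found" sentinel, all the way back up the call stack.
class Node {
    final int value;
    final Node left, right;
    Node(int value, Node left, Node right) {
        this.value = value; this.left = left; this.right = right;
    }
}

public class ReturnSearch {
    static Node find(Node n, int target) {
        if (n == null) return null;          // sentinel: nothing down here
        if (n.value == target) return n;     // found: return it directly
        Node hit = find(n.left, target);     // else ask the left subtree...
        return (hit != null) ? hit : find(n.right, target);  // ...then the right
    }

    public static void main(String[] args) {
        Node root = new Node(1, new Node(2, null, null), new Node(3, null, null));
        System.out.println(find(root, 3).value);   // prints 3
        System.out.println(find(root, 5));         // prints null
    }
}
```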
Regarding your second question, exceptions carry a significant run-time burden, regardless of how efficient the compiler can be. That alone should speak against using them as control structures in the general case.
Furthermore, exceptions amount to controlled gotos, almost equivalent to long jumps. Yes, yes, they can be nested, and in languages like Java, you can have your nice 'finally' blocks and all. Still, that's all they are, and as such, they are not meant to be general-case replacements for your typical control structures. More than four decades of collective, industrial knowledge tells us that, in general, we should avoid such things UNLESS you have a very valid reason to do so.
And that goes to the heart of your first question. Yes, there is a better way (taking your code as example)... simply use your typical control structures:
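The code sample this answer originally included did not survive in this copy. Given the question's ThrowableTester, the rewrite was presumably along these lines (a reconstruction, not the answerer's actual code):

```java
// The question's rec() rewritten with a plain loop: no throws clause,
// no TestThrowable class, no try/catch at the call site.
public class Tester {
    int i = 0;

    void rec() {
        while (i < 10) {
            i++;
        }
    }

    public static void main(String[] args) {
        Tester tt = new Tester();
        tt.rec();
        System.out.print("All good\n");
    }
}
```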
See? Simpler. Fewer lines of code. No redundant try/catch or unnecessary exception throwing. You achieve the same result.
In the end, our job is not to play with language constructs, but to create programs that are sensible, sufficiently simple from a maintainability point of view, with just enough statements to get the job done and nothing else.
So, when it comes to the example code that you provided, you have to ask yourself: what did I get with that approach that I cannot get when using typical control structures?
If you don't worry about the return logic, then simply ignore the return or define your method to be of type void. Wrapping it in a try/catch simply makes the code more complex than necessary. If you don't care about the return value, I'm sure you care that the method completes. So all you need is to simply call it (as in the code sample I provided with this post).

It is cheaper to push the return value (pretty much an object reference in the JVM) onto the stack before completion of the method than to do all the bookkeeping involved with throwing an exception (running epilogs and filling up a potentially big stack trace) and catching it (traversing the stack trace). JVM or not, this is basic CS 101 stuff.

So, not only is it more expensive, you still have to type more characters to code the same thing.
There is virtually no recursive method that you can exit via a Throwable that you cannot re-write using your typical control structures. You need to have a very, very, but very good reason to use an exception in lieu of control structures.
Just. Don't.
See: Effective Java by Joshua Bloch, p. 243
I do not know if it was a good idea or not, but while designing a CLI (without using prepared libraries) it occurred to me that a natural way to handle going back from a position in the application, without messing up the system stack, is to use a Throwable (if you just call the method from which you came to this one, you will get a StackOverflowError if someone, say, goes forward and backward about 255 times in the application menus). Since going back using a Throwable is independent of where you are in the application, it gave me the power to make the methods abstract (in the literal sense), i.e., all of the menus consisting of some entries of class X were handled with one method.
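A minimal sketch of that idea (all names invented): going "back" by throwing unwinds the submenu's stack frame, so the depth stays bounded no matter how many forward/back hops happen, whereas re-calling the previous menu's method on "back" would grow the stack on every hop.

```java
// Simulates 1000 forward/back hops through a submenu; the counters
// show the stack never gets deeper than one submenu frame.
class Back extends Throwable {}

public class MenuDemo {
    static int depth = 0, maxDepth = 0, hops = 0;

    static void mainMenu() {
        while (hops < 1000) {            // user keeps going forward...
            try {
                subMenu();
            } catch (Back b) {
                depth--;                 // ...and back; the frame is unwound
            }
        }
        System.out.println(maxDepth);    // prints 1: no stack growth
    }

    static void subMenu() throws Back {
        depth++;
        maxDepth = Math.max(maxDepth, depth);
        hops++;
        throw new Back();                // user chose "back"
    }

    public static void main(String[] args) {
        mainMenu();
    }
}
```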