How do you achieve test isolation for Oracle PL/SQL testing?

Posted 2024-11-15 10:23:52

In Java projects, JUnit tests do a setup, test, teardown. Even when mocking out a real db using an in-memory db, you usually roll back the transaction or drop the db from memory and recreate it between each test. This gives you test isolation, since one test does not leave artifacts in the environment that could affect the next test. Each test starts out in a known state and cannot bleed over into another one.

Now I've got an Oracle db build that creates 1100 tables and 400K of code - a lot of pl/sql packages. I'd like to not only test the db install (full - create from scratch, partial - upgrade from a previous db, etc.) and make sure all the tables and other objects are in the state I expect after the install, but ALSO run tests on the pl/sql (I'm not sure how I'd do the former exactly - suggestions?).

I'd like this all to run from Jenkins for CI so that development errors are caught via regression testing.

Firstly, I have to use an enterprise edition instead of XE, because XE doesn't support java SPs and there is a dependency on Oracle Web Flow. Even if I eliminate those dependencies, the build typically takes 1.5 hours just to load (full build).

So how do you achieve test isolation in this environment? Use transactions for each test and roll them back? OK, but what about those pl/sql procedures that have commits in them?
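To make that concrete, here is roughly the kind of per-test wrapper I have in mind (just a sketch; the procedure name is made up):

-- Hypothetical per-test harness: run one test procedure, then undo whatever it changed.
CREATE OR REPLACE PROCEDURE run_one_test (p_test IN VARCHAR2) AS
BEGIN
  SAVEPOINT before_test;
  EXECUTE IMMEDIATE 'BEGIN ' || p_test || '; END;';
  -- Undo everything the test touched. If the code under test issued its own COMMIT,
  -- the savepoint no longer exists and this raises ORA-01086 - which is exactly the
  -- problem with packages that commit internally.
  ROLLBACK TO SAVEPOINT before_test;
END;
/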

I thought about backup and recovery to reset the db after each test, or recreating the entire db between tests (too drastic). Both are impractical since the install takes over an hour. Doing so for each test is overkill and insane.

Is there a way to draw a line in the sand in the db schema(s) and then roll it back to that point in time? Sorta like a big 'undo' feature. Something besides expdp/impdp or rman. Perhaps the whole approach is off. Suggestions? How have others done this?

For CI or a small production upgrade window, the whole test suite has to run within a reasonable time (30 mins would be ideal).

Are there products that might help achieve this 'undo' ability?

Comments (4)

腻橙味 2024-11-22 10:23:52

Kevin McCormack published an article on The Server Labs Blog about continuous integration testing for PL/SQL using Maven and Hudson. Check it out. The key ingredient for the testing component is Steven Feuerstein's utPlsql framework, which is an implementation of JUnit's concepts in PL/SQL.
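For anyone who hasn't seen it, a utPlsql test is roughly shaped like this (a sketch only, using the framework's classic ut_ naming convention; PRICING_PKG and the expected value are made up):

-- Hypothetical test package: ut_setup/ut_teardown plus one procedure per test,
-- run with: exec utplsql.test('pricing');
CREATE OR REPLACE PACKAGE ut_pricing AS
  PROCEDURE ut_setup;
  PROCEDURE ut_teardown;
  PROCEDURE ut_bulk_order_discount;
END ut_pricing;
/
CREATE OR REPLACE PACKAGE BODY ut_pricing AS
  PROCEDURE ut_setup IS
  BEGIN
    NULL;  -- insert fixture rows here
  END;

  PROCEDURE ut_teardown IS
  BEGIN
    NULL;  -- remove fixture rows here
  END;

  PROCEDURE ut_bulk_order_discount IS
  BEGIN
    -- assert that an order of 100 units gets the 10% discount
    utassert.eq('bulk order discount', pricing_pkg.price_for(100), 90);
  END;
END ut_pricing;
/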

The need to reset our test fixtures is one of the big issues with PL/SQL testing. One thing which helps is to observe good practice and avoid commits in stored procedures: transactional control should be restricted to only the outermost parts of the call stack. For those programs which simply must issue commits (perhaps implicitly because they execute DDL) there is always a test fixture which issues DELETE statements. Handling relational integrity makes those quite tricky to code.
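By way of illustration, such a DELETE-based fixture has to respect the foreign keys, children first (table and column names are made up):

-- Hypothetical teardown for data committed by the code under test.
DELETE FROM order_lines WHERE order_id IN (SELECT order_id FROM orders WHERE created_by = 'UTEST');
DELETE FROM orders WHERE created_by = 'UTEST';
COMMIT;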

An alternative approach is to use Data Pump. You appear to have ruled out impdp, but Oracle also provides a PL/SQL API for it, DBMS_DATAPUMP. I suggest it here because it provides the ability to trash any existing data prior to running an import. So we can have an exported data set as our test fixture; executing the SetUp is then a matter of running a Data Pump job. You don't need to do anything in the TearDown, because that tidying up happens at the start of the SetUp.
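A bare-bones version of that SetUp might look something like this (the directory object, dump file and schema are placeholders):

-- Hypothetical SetUp: reload the fixture data from a previously exported dump,
-- replacing any existing tables first (TABLE_EXISTS_ACTION=REPLACE).
DECLARE
  l_job   NUMBER;
  l_state VARCHAR2(30);
BEGIN
  l_job := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');

  DBMS_DATAPUMP.ADD_FILE(
    handle    => l_job,
    filename  => 'fixture.dmp',      -- placeholder dump file
    directory => 'TEST_DUMP_DIR');   -- placeholder directory object

  DBMS_DATAPUMP.SET_PARAMETER(l_job, 'TABLE_EXISTS_ACTION', 'REPLACE');

  DBMS_DATAPUMP.START_JOB(l_job);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_job, l_state);
END;
/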

天涯离梦残月幽梦 2024-11-22 10:23:52

In Oracle you can use Flashback Technology to restore the database to a point back in time.

http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmflash.htm
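Roughly, the pattern is a guaranteed restore point before the test run and a flashback afterwards (a sketch only - it needs the appropriate edition and configuration, and the restore point name is made up):

-- Before the test run:
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- After the test run (from SQL*Plus or RMAN, with the instance mounted, not open):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;

-- When you are done with it:
DROP RESTORE POINT before_tests;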

一笑百媚生 2024-11-22 10:23:52

1.5 hours seems like a very long time for 1100 tables and 400K of code. I obviously don't know the details of your environment, but based on my experience I bet you can shrink that to 5 to 10 minutes. Here are the two main installation script problems I've seen with Oracle:

1. Operations are broken into tiny pieces

The more steps you have the more overhead there will be. For example, you want to consolidate code like this as much as possible:

Replace:

create table x(a number, b number, c number);
alter table x modify a not null;
alter table x modify b not null;
alter table x modify c not null;

With:

create table x(a number not null, b number not null, c number not null);

Replace:

insert into x values (1,2,3);
insert into x values (4,5,6);
insert into x values (7,8,9);

With:

insert into x
select 1,2,3 from dual union all
select 4,5,6 from dual union all
select 7,8,9 from dual;

This is especially true if you run your script and your database in different locations. That tiny network lag starts to matter when you multiply it by 10,000. Every Oracle SQL tool I know of will send one command at a time.

2. Developers have to share a database

This is more of a long-term process solution than a technical fix, but you have to start sometime. Most places that use Oracle only have it installed on a few servers. Then it becomes a scarce resource that must be carefully managed. People fight over it, roles are unclear, and things don't get fixed.

If that's your environment, stop the madness and install Oracle on every laptop right now. Spend a few hundred dollars and give everyone Personal Edition (which has the same features as Enterprise Edition). Give everyone the tools they need and continuous improvement will eventually fix your problems.


Also, for a schema "undo", you may want to look into transportable tablespaces. I've never used them, but supposedly they're a much faster way of installing a system - just copy and paste files instead of importing. Similarly, perhaps some type of virtualization can help - create a snapshot of the OS and database.
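If it helps, the transportable tablespace flow is roughly the following (tablespace, directory and file names are made up, and the tablespace set has to be self-contained):

-- On the source: freeze the tablespace and export just its metadata.
ALTER TABLESPACE app_data READ ONLY;
-- expdp system DIRECTORY=dump_dir DUMPFILE=app_data.dmp TRANSPORT_TABLESPACES=app_data
-- Copy the app_data datafiles and app_data.dmp to the target, then plug them in:
-- impdp system DIRECTORY=dump_dir DUMPFILE=app_data.dmp TRANSPORT_DATAFILES='/u01/oradata/app_data01.dbf'
ALTER TABLESPACE app_data READ WRITE;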

公布 2024-11-22 10:23:52

Although Oracle Flashback is an Enterprise Edition feature, the technology it is based on is available in all editions, namely Oracle LogMiner:

http://docs.oracle.com/cd/B28359_01/server.111/b28319/logminer.htm#i1016535

I would be interested to know whether anybody has used this to provide test isolation for functional tests, i.e. querying V$LOGMNR_CONTENTS to get a list of UNDO statements from a point in time corresponding to the beginning of the test.

The database needs to be in ARCHIVELOG mode. In the JUnit test case, a setup method (annotated with @Before) would call DBMS_LOGMNR.START_LOGMNR. The test would run, and then a teardown method (annotated with @After) would query V$LOGMNR_CONTENTS to find the list of UNDO statements. These would then be executed via JDBC. In fact, the querying and execution of the UNDO statements could be extracted into a PL/SQL stored procedure. The order in which the statements are executed would have to be considered.

I think this has the benefit of allowing the transactions to commit, which is where an awful lot of bugs can creep in, i.e. referential integrity, primary key violations etc.
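To sketch the teardown idea (purely hypothetical - it assumes ARCHIVELOG mode with supplemental logging enabled, and the schema name is a placeholder):

-- Hypothetical teardown: replay the undo SQL for everything changed since p_start.
CREATE OR REPLACE PROCEDURE undo_since (p_start IN DATE) AS
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    starttime => p_start,
    endtime   => SYSDATE,
    options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);

  -- Newest first, so dependent changes unwind in the right order.
  FOR r IN (SELECT sql_undo
              FROM v$logmnr_contents
             WHERE seg_owner = 'APP_OWNER'   -- placeholder schema
               AND sql_undo IS NOT NULL
             ORDER BY scn DESC) LOOP
    EXECUTE IMMEDIATE RTRIM(r.sql_undo, ';');
  END LOOP;

  DBMS_LOGMNR.END_LOGMNR;
  COMMIT;
END;
/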
