Are there well-defined patterns for software scalability testing?

Posted 2024-07-22 05:19:16

I've recently become quite interested in identifying patterns for software scalability testing. Due to the variable nature of different software solutions, it seems like there are as many good solutions to the problem of scalability-testing software as there are to designing and implementing software. To me, that means we can probably distill some patterns for this type of testing that are widely used.

For the purposes of eliminating ambiguity, I'll say in advance that I'm using the Wikipedia definition of scalability testing.

I'm most interested in answers proposing specific pattern names with thorough descriptions.


Comments (2)

甜心 2024-07-29 05:19:16

All the testing scenarios I am aware of use the same basic structure for the test, which involves generating a number of requests on one or more requesters targeted at the processing agent to be tested. Kurt's answer is an excellent example of this process. Generally you will run the tests to find some thresholds, and also run some alternative configurations (fewer nodes, different hardware, etc.) to build up accurate averaged data.

A requester can be a machine, network card, specific software or thread in software that generates the requests. All it does is generate a request that can be processed in some way.

A processing agent is the software, network card, machine that actually processes the request and returns a result.
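That requester/agent structure can be sketched minimally in code. Everything here is hypothetical: `handle_request` stands in for whatever system is actually under test, and a thread pool plays the requesters:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Processing agent: a stand-in for the real system under test."""
    time.sleep(0.001)          # simulate processing work
    return len(payload)        # any result the test can inspect

def run_requesters(num_requesters, requests_each):
    """Requesters: generate requests against the agent and collect results."""
    total = num_requesters * requests_each
    with ThreadPoolExecutor(max_workers=num_requesters) as pool:
        futures = [pool.submit(handle_request, f"req-{i}") for i in range(total)]
        return [f.result() for f in futures]

results = run_requesters(num_requesters=4, requests_each=10)
print(f"{len(results)} requests processed")
```

What you then do with `results` is what distinguishes the test types below.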

However, what you do with the results determines the type of test you are doing. The types are:

Load/Performance Testing: This is the most common one in use. The results are processed to see how much is handled at various levels or in various configurations. Again, what Kurt is looking for above is an example of this.
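As a minimal sketch of that idea (the workload and concurrency levels are illustrative, not from the answer), this measures throughput of a dummy handler at increasing requester counts:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.002)  # stand-in for real processing work
    return True

def throughput_at(level, total_requests=50):
    """Requests per second when `level` requesters run concurrently."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=level) as pool:
        list(pool.map(handle_request, range(total_requests)))
    return total_requests / (time.perf_counter() - start)

# Run the same load at several configurations to find thresholds.
for level in (1, 5, 10):
    print(f"{level:>2} requesters: {throughput_at(level):6.0f} req/s")
```

In a real run you would keep raising the level until throughput plateaus or latency becomes unacceptable; that knee is the threshold you are looking for.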

Balance Testing: A common practice in scaling is to use a load-balancing agent which directs requests to a processing agent. The setup is the same as for load testing, but the goal is to check the distribution of requests. In some scenarios you need to make sure that an even (or as close to even as is acceptable) balance of requests across processing agents is achieved, and in other scenarios you need to make sure that the processing agent which handled the first request from a specific requester handles all subsequent requests (web farms commonly need this, i.e. session affinity).
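Both checks can be sketched against toy balancing policies (the agent names and policies here are hypothetical, not a real load balancer's API):

```python
from collections import Counter

AGENTS = ["agent-a", "agent-b", "agent-c"]

def round_robin_balancer():
    """Yields agents in rotation: one simple even-distribution policy."""
    i = 0
    while True:
        yield AGENTS[i % len(AGENTS)]
        i += 1

def sticky_balancer(requester_id):
    """Same requester always maps to the same agent (session affinity)."""
    return AGENTS[hash(requester_id) % len(AGENTS)]

# Even-distribution check: counts per agent should be (close to) equal.
rr = round_robin_balancer()
counts = Counter(next(rr) for _ in range(300))
assert max(counts.values()) - min(counts.values()) <= 1

# Affinity check: repeated requests from one requester hit one agent.
assert len({sticky_balancer("user-42") for _ in range(100)}) == 1
print(dict(counts))
```

Against a real balancer you would tag each request, log which agent served it, and run the same two assertions over the logs.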

Data Safety: With this test the results are collected and the data is compared. What you are looking for here are locking issues (such as an SQL deadlock) that prevent writes, and confirmation that data changes are replicated to the various nodes or repositories you have in use within an acceptable time or less.
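The replication half of that check can be sketched with a toy replica that exposes a write only after a simulated lag (both `LaggyReplica` and the timing numbers are invented for illustration):

```python
import time

class LaggyReplica:
    """Toy replica: a write becomes visible only after `lag` seconds."""
    def __init__(self, lag):
        self.lag = lag
        self.staged = {}   # key -> (value, visible_at)

    def write(self, key, value):
        self.staged[key] = (value, time.monotonic() + self.lag)

    def read(self, key):
        value, visible_at = self.staged.get(key, (None, 0))
        return value if time.monotonic() >= visible_at else None

def replication_within(replicas, key, value, deadline):
    """Data-safety check: do all replicas show the write before the deadline?"""
    for r in replicas:
        r.write(key, value)
    end = time.monotonic() + deadline
    while time.monotonic() < end:
        if all(r.read(key) == value for r in replicas):
            return True
        time.sleep(0.005)
    return False

replicas = [LaggyReplica(lag=0.02), LaggyReplica(lag=0.05)]
print(replication_within(replicas, "k", "v", deadline=0.5))
```

A real test would poll actual nodes the same way; the deadline is your "acceptable time" from the paragraph above.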

Boundary Testing: This is similar to load testing, except the goal is not processing performance but how the amount of data stored affects performance. For example, if you have a database, how many rows/tables/columns can you have before I/O performance drops below acceptable levels?
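The row-count example can be sketched with an in-memory SQLite database (the schema, query, and threshold are all arbitrary stand-ins for your real workload):

```python
import sqlite3
import time

def query_time_at_rows(n_rows):
    """Fill a fresh table with n_rows and time a representative query."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    db.executemany("INSERT INTO t VALUES (?, ?)",
                   ((i, "x" * 100) for i in range(n_rows)))
    start = time.perf_counter()
    db.execute("SELECT COUNT(*) FROM t WHERE payload LIKE 'x%'").fetchone()
    return time.perf_counter() - start

ACCEPTABLE = 0.5  # seconds -- an arbitrary threshold for this sketch
for rows in (1_000, 10_000, 100_000):
    t = query_time_at_rows(rows)
    status = "ok" if t <= ACCEPTABLE else "TOO SLOW"
    print(f"{rows:>7} rows: {t:.4f}s {status}")
```

You keep growing the stored volume until the measured time crosses the acceptable threshold; that volume is the boundary.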

I would also recommend The Art of Capacity Planning as an excellent book on the subject.

樱桃奶球 2024-07-29 05:19:16

I can add one more type of testing to Robert's list: soak testing. You pick a suitably heavy test load, and then run it for an extended period of time - if your performance tests usually last for an hour, run it overnight, all day, or all week. You monitor both correctness and performance. The idea is to detect any kind of problem which builds up slowly over time: things like memory leaks, packratting, occasional deadlocks, indices needing rebuilding, etc.

This is a different kind of scalability, but it's important. When your system leaves the development shop and goes live, it doesn't just get bigger 'horizontally', by adding more load and more resources, but in the time dimension too: it's going to be running non-stop on the production machines for weeks, months or years, which it hasn't done in development.
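A soak test can be sketched as a loop bounded by wall-clock time rather than request count. This toy version (the workload, duration, and sampling interval are illustrative) checks correctness on every cycle and samples memory via tracemalloc to surface slow build-up:

```python
import time
import tracemalloc

def handle_request(i):
    """Stand-in workload; a real soak test drives the actual system."""
    return sum(range(i % 100))

def soak(duration_s, check_every=1000):
    """Run the workload continuously, sampling memory to catch slow leaks."""
    tracemalloc.start()
    samples, i = [], 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        assert handle_request(i) >= 0          # correctness check each cycle
        if i % check_every == 0:
            samples.append(tracemalloc.get_traced_memory()[0])
        i += 1
    tracemalloc.stop()
    return i, samples

iterations, mem_samples = soak(duration_s=0.2)
print(f"{iterations} requests; {len(mem_samples)} memory samples")
```

In a real soak run `duration_s` would be hours or days, and a steadily rising memory trend across the samples is the leak signal you are watching for.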
