Performance benchmarking of Node.js (cluster) using MySQL pools: Lighttpd + PHP?

Posted 2024-12-07 21:02:53

Edit (2): Now using db-mysql with the generic-pool module. The error rate has dropped significantly and hovers around 13%, but throughput is still only about 100 req/sec.

Edit (1): After someone suggested that ORDER BY RAND() would make MySQL slow, I removed that clause from the query. Node.js now hovers around 100 req/sec, but the server still reports 'CONNECTION error: Too many connections'.

Node.js or Lighttpd with PHP?

You have probably seen many "Hello World" benchmarks of node.js... but "hello world" tests, even ones delayed by 2 seconds per request, are nowhere near real-world production usage. I also ran those variations of the "Hello World" test using node.js and saw throughput of about 800 req/sec with a 0.01% error rate. However, I decided to run some tests that were a bit more realistic.

Maybe my tests are not complete, and most likely something is REALLY wrong with node.js or my test code, so if you're a node.js expert, please help me write some better tests. My results are published below. I used Apache JMeter to do the testing.

Test Case and System Specs

The test is pretty simple: a MySQL query selects the users in random order, and the first user's username is retrieved and displayed. The MySQL database connection goes through a Unix socket. The OS is FreeBSD 8+, with 8GB of RAM and an Intel Xeon quad-core 2.x GHz processor. I had tuned the Lighttpd configuration a bit before I even came across node.js.

Apache JMeter Settings

Number of threads (users): 5000 (I believe this is the number of concurrent connections)

Ramp-up period (in seconds): 1

Loop count: 10 (this is the number of requests per user)
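For reference, these settings fix the total number of requests JMeter attempts per run; a quick arithmetic check (in the same JavaScript as the server code) ties them to the sample counts reported in the results table:

```javascript
// Total samples JMeter attempts per run with the settings above.
var threads = 5000;  // concurrent users
var loops = 10;      // requests per user
var total = threads * loops;
console.log(total);  // 50000; the Lighttpd run records 49918 of these
```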

Apache JMeter End Results

Label                  | # Samples | Average  | Min   | Max      | Std. Dev. | Error % | Throughput | KB/sec | Avg. Bytes
HTTP Requests Lighttpd | 49918     | 2060ms   | 29ms  | 84790ms  | 5524      | 19.47%  | 583.3/sec  | 211.79 | 371.8
HTTP Requests Node.js  | 13767     | 106569ms | 295ms | 292311ms | 91764     | 78.86%  | 44.6/sec   | 79.16  | 1816

Result Conclusions

Node.js was so bad I had to stop the test early. [Fixed: tested completely]

Node.js reported "CONNECTION error: Too many connections" on the server. [Fixed]

Most of the time, Lighttpd had a throughput of about 1200 req/sec.

However, node.js had a throughput of about 29 req/sec. [Fixed: now at 100 req/sec]

This is the code I used for node.js (using MySQL pools):

var cluster = require('cluster'),
    http = require('http'),
    mysql = require('db-mysql'),
    generic_pool = require('generic-pool');

// Pool of at most 10 db-mysql connections over the local Unix socket.
var pool = generic_pool.Pool({
    name: 'mysql',
    max: 10,
    create: function(callback) {
        new mysql.Database({
            socket: '/tmp/mysql.sock',
            user: 'root',
            password: 'password',
            database: 'v3edb2011'
        }).connect(function(err) {
            // Inside connect(), `this` is the db-mysql Database instance.
            callback(err, this);
        });
    },
    destroy: function(db) {
        db.disconnect();
    }
});

var server = http.createServer(function(request, response) {
    response.writeHead(200, {"Content-Type": "text/html"});
    pool.acquire(function(err, db) {
        if (err) {
            return response.end("CONNECTION error: " + err);
        }

        db.query('SELECT * FROM tb_users').execute(function(err, rows, columns) {
            // Release the connection back to the pool before responding.
            pool.release(db);

            if (err) {
                return response.end("QUERY ERROR: " + err);
            }
            response.write(rows.length + ' ROWS found using node.js<br />');
            response.end(rows[0]["username"]);
        });
    });
});

cluster(server)
  .set('workers', 5)
  .listen(8080);

This is the code I used for PHP (Lighttpd + FastCGI):

<?php
  $conn = new mysqli('localhost', 'root', 'password', 'v3edb2011');
  // new mysqli() always returns an object, so check the connection
  // error code rather than the object's truthiness.
  if (!$conn->connect_errno) {
    $result = $conn->query('SELECT * FROM tb_users ORDER BY RAND()');
    if ($result) {
      echo ($result->num_rows).' ROWS found using Lighttpd + PHP (FastCGI)<br />';
      $row = $result->fetch_assoc();
      echo $row['username'];
    } else {
      echo 'Error : DB Query';
    }
  } else {
    echo 'Error : DB Connection';
  }
?>


Comments (7)

余生再见 2024-12-14 21:02:53

This is a bad benchmark comparison. In node.js you're selecting the whole table and putting it into an array. In PHP you're only fetching the first row, so the bigger your table is, the slower node will look. If you made PHP use mysqli_fetch_all, it would be a similar comparison. While db-mysql is supposed to be fast, it's not very full-featured and lacks what it takes to make this a fair comparison. Using a different node.js module such as node-mysql-libmysqlclient should allow you to process only the first row.
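A minimal sketch of the change this comment suggests, assuming the db-mysql chain API already shown in the question (the helper name is hypothetical; only the SQL actually changes):

```javascript
// Build the query both stacks should run so each does the same work:
// only the first username is needed, so don't buffer the whole table.
function fairQuery(table) {
    return 'SELECT username FROM ' + table + ' LIMIT 1';
}

// In the node handler from the question this would replace the
// full-table SELECT:
//   db.query(fairQuery('tb_users')).execute(function(err, rows) { ... });
console.log(fairQuery('tb_users'));
```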

梦里兽 2024-12-14 21:02:53

100 connections is the default setting for MySQL's maximum number of connections.

So somehow your connections aren't being reused across requests. Probably you already have one query running on each connection.

Maybe the node.js MySQL library you are using doesn't queue queries on the same MySQL connection but instead tries to open another connection and fails.

情归归情 2024-12-14 21:02:53

Correct me if I'm wrong, but I feel like you are overlooking something: Node uses a single process to handle every request (and handles them through events, still the same process), while PHP gets a new process (or thread) for every request.

The problem with this is that the one node process sticks to one core of the CPU, while PHP gets to scale across all four cores through multiple workers. I would say that with a quad-core 2.x GHz processor, PHP would definitely have a significant advantage over Node just by being able to utilize the extra resources.

There is another discussion giving some information about how to scale Node over multiple cores, but that has to be done explicitly in code. Again, correct me if I'm wrong, but I don't see any such code in the example above.

I'm pretty new to Node myself, but I hope this helps you improve your test :)

愿与i 2024-12-14 21:02:53

Have you enabled APC with PHP?

Can you try enabling persistent connections with PHP?
e.g.

$conn = new mysqli('p:localhost', 'root', 'password', 'v3edb2011');

我早已燃尽 2024-12-14 21:02:53

Aren't you using 10 maximum MySQL connections in Node.js, and 5000 maximum MySQL connections via PHP?

While you run your tests on either system, I would take a look at MySQL's "SHOW FULL PROCESSLIST".

白芷 2024-12-14 21:02:53

One thing to consider is the driver: performance against a database can be closely tied to the specific driver you are using. The most popular mysql driver, and the one that is most actively maintained, is https://github.com/felixge/node-mysql. You might get different results with that.

But if you are stuck at 100 connections, it sounds like connections are not being properly closed. I would add a console.log statement in the pool's destroy handler to make sure it really is executing.
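A tiny, self-contained illustration of that suggestion (the wrapper is hypothetical; in the real pool the inner function would call `db.disconnect()`):

```javascript
// Wrap a pool's destroy handler so every invocation is logged;
// if the counter never moves under load, connections are leaking.
function instrumentDestroy(destroy) {
    var calls = 0;
    function wrapped(db) {
        calls += 1;
        console.log('pool destroy #' + calls);
        return destroy(db);
    }
    wrapped.count = function() { return calls; };
    return wrapped;
}

// Stand-in for the question's destroy handler:
var destroy = instrumentDestroy(function(db) { /* db.disconnect() */ });
destroy({});  // prints "pool destroy #1"
```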

泪眸﹌ 2024-12-14 21:02:53

This is a bad benchmark; it should have been a simple "hello world", like the thousands of benchmarks that prove node.js is the fastest "hello world" server of all time :D
