Options to reduce open database connections?

Posted 2024-12-12 00:09:32

I have a free application that is heavily used; from time to time I see around 500 to 1000 concurrent users.

It is a desktop application that communicates with my website's API, receiving data every 5 to 15 minutes and sending back a minimal amount of data, about 3 SELECTs at most, every 15 minutes.

Since users can turn the application on and off whenever they like, the time at which each copy queries my API varies, and as a result I have been hitting the maximum connection limit of my hosting plan.

I don't want to upgrade the plan, both for financial reasons and because the application is non-profit for the moment, so I am looking for other options to reduce the number of connections and to cache whatever information can be cached.

The first thing that came to mind was FastCGI with Perl. I have been testing it for some time now and it seems to work well, but I ran into two problems while using it:

  1. If for whatever reason the application goes idle for 60, the server
     kills it, and the next few requests get error 500 until the script
     is respawned, which takes 3+ minutes (yes, it really takes that
     long; I have tried my code locally on my own test server and it
     comes up instantly, so I am sure it is an issue with my hosting
     company's server, but they don't seem to want to fix it).

  2. The kill timeout is set to 300, and the script is killed/restarted
     after that period, which leads to the same respawn problem
     described in 1).

Given that, I am now looking for alternatives that are not based on FastCGI, if there are any. Also, due to the limitations of the shared host, I can't run my own daemon, and my ability to compile anything is very limited.

Are there any good options to achieve this with either Perl or PHP?

Mainly: reduce the open database connections to a minimum while still being able to cache some SELECT queries that return data... The application mostly inserts/updates data anyway, so there isn't much to cache.

This is the simple code I was using to test it:

#!/usr/bin/perl

use strict;
use warnings;

use CGI::Simple; # Can't use CGI.pm here: it didn't clear the parameters
                 # between requests. I haven't investigated further, but
                 # I needed something working to test, and CGI::Simple
                 # was the fastest solution I found.
use DBI;

use lib qw( /home/my_user/perl_modules/lib/perl/5.10.1 );
use FCGI;

# One persistent connection for the lifetime of this FastCGI process.
# Errors are handled explicitly via dbError(), so automatic raising
# and printing are turned off.
my $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                       'username', 'password',
                       {RaiseError => 0, PrintError => 0, AutoCommit => 1})
    or dbError($DBI::errstr);

my $request = FCGI::Request();
while ($request->Accept() >= 0)
{
    my $query  = CGI::Simple->new;
    my $action = $query->param("action") // '';
    my $id     = $query->param("id");
    my $server = $query->param("server");
    my $ip     = $ENV{'REMOTE_ADDR'};

    print $query->header();

    if ($action eq "exp")
    {
        # Upsert: insert the row, or refresh the server address if the
        # id already exists.
        my $sth = $dbh->prepare(qq{
                            INSERT INTO
                               my_data (id, server) VALUES (?, INET_ATON(?))
                            ON DUPLICATE KEY UPDATE
                               server = INET_ATON(?)}) or die $dbh->errstr;
        my $result = $sth->execute($id, $server, $server)
            or die $dbh->errstr;
        $sth->finish;
        if ($result)
        {
            print "1";
        }
        else
        {
            print "0";
        }
    }
    else
    {
        print "0";
    }
}

$dbh->disconnect or die $DBI::errstr;
exit(0);

# Send the error text as the response body and terminate the process.
sub dbError
{
    my ($txt_erro) = @_;
    my $query = CGI::Simple->new;
    print $query->header();
    print $txt_erro;
    exit(0);
}
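One hardening that is not in the script above, added here only as an illustration: since the handle lives for the whole process, MySQL's own idle timeout can silently kill it between requests. A ping-and-reconnect helper (the name get_dbh is made up) keeps things to at most one live connection per process:

# Return a live handle, reconnecting only if the old one has died, so
# each FastCGI process still holds at most one MySQL connection.
sub get_dbh
{
    my ($dbh) = @_;
    return $dbh if $dbh && $dbh->ping;   # existing connection still good
    $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                        'username', 'password',
                        {RaiseError => 0, PrintError => 0, AutoCommit => 1})
        or dbError($DBI::errstr);
    return $dbh;
}

# At the top of the accept loop:  $dbh = get_dbh($dbh);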



Comments (2)

瘫痪情歌 2024-12-19 00:09:32

Run a proxy. Perl's DBD::Proxy should fit the bill. The proxy server shouldn't be under your host's control, so its 60-???-of-inactivity rule shouldn't apply there.
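For illustration, a minimal sketch of the client side with DBD::Proxy, assuming a dbiproxy server (shipped with DBI as DBI::ProxyServer) is already listening on a machine you control; the hostname and port below are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder location of the proxy; on that machine something like
# "dbiproxy --localport=3334" would be running.
my $proxy_host = 'proxy.example.com';
my $proxy_port = 3334;

# DBD::Proxy wraps the real DSN: the actual MySQL connection is opened
# from the proxy machine, not from the shared host, so the shared
# host's connection and inactivity limits no longer apply to it.
my $dbh = DBI->connect(
    "dbi:Proxy:hostname=$proxy_host;port=$proxy_port;"
      . "dsn=dbi:mysql:mydatabase:mymysqlservername",
    'username', 'password',
    {RaiseError => 1, AutoCommit => 1}
) or die $DBI::errstr;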

Alternatively, install a cron job that runs more often than the FastCGI timeout, simply to wget some "make activity" page on your site and discard the output. Some CRMs do this to force a "check for updates", for example, so it's not completely unusual, though it is somewhat of an annoyance here.
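As a sketch, a crontab entry along these lines would poke the script once a minute (well inside both the 60 idle limit and the 300 kill timeout) and throw the response away; the URL is a placeholder:

* * * * * wget -q -O /dev/null http://www.example.com/cgi-bin/api.fcgi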


痞味浪人 2024-12-19 00:09:32

FWIW, you probably want to look at CGI::Fast instead of CGI::Simple to resolve your problem of CGI.pm not handling persistent variables in the expected manner...
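A minimal sketch of the same loop using CGI::Fast, which wraps the FCGI accept loop for you and hands back a fresh CGI object per request (DSN and parameter names carried over from the question; the real INSERT logic is elided):

#!/usr/bin/perl
use strict;
use warnings;

use CGI::Fast;  # wraps the FCGI accept loop around CGI.pm
use DBI;

# The connection persists across requests, exactly as in the FCGI version.
my $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                       'username', 'password',
                       {RaiseError => 1, AutoCommit => 1});

# CGI::Fast->new blocks until the next request and returns a CGI object
# scoped to it, so parameters cannot leak between requests.
while (my $query = CGI::Fast->new)
{
    my $action = $query->param("action") // '';
    print $query->header();
    # ... same INSERT/UPDATE handling as in the question ...
    print $action eq "exp" ? "1" : "0";
}

$dbh->disconnect;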

