Django: how do I stop long queries from killing the database?

Published 2024-08-03 04:03:31


I'm using Django 1.1 with Mysql 5.* and MyISAM tables.

Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.

I'm going to try to track down all the corner cases, but it's nice to have a safety net so the site doesn't come down.

How do I avoid this? Can I set maximum query times?

Comments (6)

落日海湾 2024-08-10 04:03:31

Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is "long") and kill ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL which will write all queries that take longer than 10 seconds into a log. If that's too long for what you regard as "slow" for your purposes, you can set long_query_time to a value other than 10 to change the threshold.
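The watchdog approach this answer describes can be sketched as a small helper (hedged: the function name is an assumption; on MySQL 5.x, `SHOW FULL PROCESSLIST` returns rows shaped `(Id, User, Host, db, Command, Time, State, Info)`):

```python
def ids_to_kill(processlist_rows, max_seconds=30):
    """Given rows from SHOW FULL PROCESSLIST, return the connection ids
    of queries that have been running longer than max_seconds."""
    doomed = []
    for (conn_id, user, host, db, command, time_s, state, info) in processlist_rows:
        # Skip idle connections ("Command" == "Sleep"); only real queries
        # that have exceeded the threshold are candidates for killing.
        if command == "Query" and time_s > max_seconds:
            doomed.append(conn_id)
    return doomed
```

A cron job or daemon would then run `SHOW FULL PROCESSLIST` every X seconds through a MySQL client library, pass the rows to this helper, and issue `KILL <id>` for each id it returns.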

や莫失莫忘 2024-08-10 04:03:31

I'm doing a Django DB-replication app and have the same predicament, queries across a WAN can sometimes just seem to hang if the network latency increases.

From http://code.activestate.com/recipes/576780/

Recipe 576780: Timeout for (nearly) any callable

Create a time limited version of any callable.

For example, to limit function f to t seconds,
first create a time limited version of f.

from timelimited import *

f_t = TimeLimited(f, t)

Then, instead of invoking f(...), use f_t like

try:
    r = f_t(...)
except TimeLimitExpired:
    r = ...  # timed out

Use it the following way for example:

def _run_timed_query(cursor, log_msg, timeout, query_string, *query_args):
    """Run a timed query, do error handling and logging."""
    import sys
    import traceback
    from timelimited import *  # provides TimeLimited and TimeLimitExpired

    try:
        return TimeLimited(cursor.execute, timeout)(query_string, *query_args)
    except TimeLimitExpired:
        logger_ec.error('%s; Timeout error.' % log_msg)
        raise  # re-raise the original TimeLimitExpired, preserving its traceback
    except Exception:
        (exc_type, exc_value, tb) = sys.exc_info()
        logger_ec.error('%s; %s.' % (log_msg, traceback.format_exception(exc_type, exc_value, None)[0]))
        raise  # re-raise the original exception instead of its bare class

灯角 2024-08-10 04:03:31

It seems that the only reliable way to abort a query is the kill command. A less drastic measure is to close the connection (and reopen a new one); this will terminate queries as soon as they try to send some output to the client.
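The kill-versus-close distinction above can be sketched as follows (hedged: `kill_stuck_query` is a hypothetical helper; the cursor can be any DB-API cursor opened on a second, administrative connection):

```python
def kill_stuck_query(admin_cursor, connection_id, query_only=True):
    """Abort the statement running on another MySQL connection.

    KILL QUERY <id> aborts the statement but leaves the connection open;
    plain KILL <id> terminates the whole connection, the more drastic
    option the answer mentions. Returns the statement sent, for logging.
    """
    statement = ("KILL QUERY %d" if query_only else "KILL %d") % int(connection_id)
    admin_cursor.execute(statement)
    return statement
```

The connection id to pass in comes from `SHOW PROCESSLIST` (or, for the current connection, `SELECT CONNECTION_ID()`).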

吃兔兔 2024-08-10 04:03:31

Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables?

悟红尘 2024-08-10 04:03:31

Use InnoDB tables; they do row-level locking instead of table-level locking.
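Converting an existing table is a single `ALTER TABLE` statement (hedged: `mytable` is a placeholder, and on large MyISAM tables the rebuild itself locks the table for the duration, so run it during a maintenance window):

```sql
ALTER TABLE mytable ENGINE = InnoDB;
```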

温柔嚣张 2024-08-10 04:03:31

You shouldn't write queries like that, at least not to run against your live database. MySQL has a "slow queries" parameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be sped up by defining a new index or two.
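The slow-query log this answer refers to is enabled in `my.cnf` (hedged: these are the MySQL 5.0-era option names matching the version in the question; the log path and threshold are placeholders):

```ini
[mysqld]
# Log every statement that takes longer than long_query_time seconds.
log_slow_queries = /var/log/mysql/slow.log
long_query_time  = 5
```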
