appengine, python: Is there a memory leak in taskqueue.add()?

The following code adds tasks that perform some processing on files from the blobstore; it runs on a B2 backend, so there is no request timeout:

import json
import time

from google.appengine.api import taskqueue

for task in tasks:
    tools.debug("add_tasks_to_process_files", "adding_task")

    # Each key of `tasks` is an (x, y, z) tuple of strings; each value is
    # the list of blob keys to process for that profile.
    taskqueue.add(
        name="Process_%s_files---%s--%s--%s--%s" % (
            len(tasks[task]), task[1], task[0], task[2], int(time.time())),
        queue_name="files-processor",
        url="/analytics/process_files/",
        params={"processing_task": json.dumps(
            {"profile": task, "blobs_to_process": tasks[task]})})

tasks is a dictionary in the following form:

{
    (x1, y1, z1): ["blob_key", "blob_key", ...],  # each list holds at most 35 keys
    (x2, y2, z2): ["blob_key", "blob_key", ...],
    ...
}

x1, y1, z1 are all strings.
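For concreteness, a hypothetical instance of this dictionary (the tuple values and blob keys below are made up):

tasks = {
    ("profile_a", "2012-01", "v1"): ["blob_key_1", "blob_key_2"],
    ("profile_b", "2012-01", "v1"): ["blob_key_3"],
}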

tools.debug is a function I wrote that sends messages to my local server using urlfetch (so I don't have to wait 20 minutes before I can read the logs):

import logging
import urllib2

from google.appengine.api import runtime
from google.appengine.api.urlfetch import fetch

import settings


def debug(location, message, params=None, force=False):
    if not (settings.REMOTE_DEBUG or settings.LOCALE_DEBUG or force):
        return

    if params is None:
        params = {}

    # Attach the current memory usage (in MB) and the instance id to
    # every debug message.
    params["memory"] = runtime.memory_usage().current()
    params["instance_id"] = settings.INSTANCE_ID

    debug_message = "%s/%s?%s" % (
        urllib2.quote(location),
        urllib2.quote(message),
        "&".join(["%s=%s" % (p, urllib2.quote(unicode(params[p]).encode("utf-8")))
                  for p in params]))

    if settings.REMOTE_DEBUG or force:
        fetch("%s/%s" % (settings.REMOTE_DEBUGGER, debug_message))

    if settings.LOCALE_DEBUG or force:
        logging.debug(debug_message)
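For illustration, a hypothetical call and the request it produces (the REMOTE_DEBUGGER value is made up, and the query-parameter order may vary):

# Assuming settings.REMOTE_DEBUG = True and
# settings.REMOTE_DEBUGGER = "http://my-laptop:8080" (hypothetical).
debug("add_tasks_to_process_files", "adding_task")
# Fetches:
# http://my-laptop:8080/add_tasks_to_process_files/adding_task?memory=57.81640625&instance_id=1329662498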

Since tools.debug wasn't in the code the first time this failed, I know for sure it isn't the cause of the memory problem.

I got this error:

   /add_tasks_to_process_files/ 500 98812ms 0kb instance=0 AppEngine-Google; (+http://code.google.com/appengine):
    A serious problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application. (Error code 201)

And right after it:

/_ah/stop 500 110ms 0kb
Exceeded soft private memory limit with 283.406 MB after servicing 1 requests total

Again, I received the same error for the code above without the line tools.debug("add_tasks_to_process_files", "adding_task").

Now, let me show you what I see in my debugger:

1 2012-1-19 14:41:38 [processors-backend] processors-backend-initiated instance_id: 1329662498, memory: 18.05078125, backend_instance_url: http://0.processors.razoss-dock-dev.appspot.com, backend_load_balancer_url: http://processors.razoss-dock-dev.appspot.com
2 2012-1-19 14:41:39 [AddTasksToProcessFiles] start instance_id: 1329662498, files_sent_to_processing_already_in_previous_failed_attempts: 0, memory: 19.3828125
3 2012-1-19 14:41:59 [AddTasksToProcessFiles] add_tasks_to_process_files-LOOP_END total_tasks_to_add: 9180, total_files_added_to_tasks: 9184, task_monitor.files_sent_to_processing: 0, total_files_on_tasks_dict: 9184, instance_id: 1329662498, memory: 56.52734375
4 2012-1-19 14:42:0 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 57.81640625
5 2012-1-19 14:42:0 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 57.81640625
6 2012-1-19 14:42:1 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 57.9375
7 2012-1-19 14:42:2 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 57.9375
8 2012-1-19 14:42:2 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 58.03125
.
.
.
2183 2012-1-19 14:53:45 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 280.66015625
2184 2012-1-19 14:53:45 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 280.66015625
2185 2012-1-19 14:53:45 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 281.0
2186 2012-1-19 14:53:46 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 281.0
2187 2012-1-19 14:53:46 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 281.0
2188 2012-1-19 14:53:46 [add_tasks_to_process_files] adding_task instance_id: 1329662498, memory: 281.3828125

Full trace: http://pastebin.com/CcPDU6s7

Is there a memory leak in taskqueue.add() ?

Thanks

1 Answer

Answer by 放血 (2025-01-14 05:29:28):

While this doesn't answer your particular question, have you tried using Queue.add() to add tasks in batches?

http://code.google.com/appengine/docs/python/taskqueue/queues.html#Queue_add

You can add up to 100 tasks at once.

http://code.google.com/appengine/docs/python/taskqueue/overview-push.html#Quotas_and_Limits_for_Push_Queues

Untested code.

queue = taskqueue.Queue(name="files-processor")
while tasks:
    # Take up to 100 entries per batch; popitem() empties `tasks` as it goes.
    batch = [tasks.popitem() for _ in range(min(len(tasks), 100))]
    # Queue.add() accepts a Task or a list of Tasks, so pass a list.
    queue.add([taskqueue.Task(...) for k, v in batch])

If you still want to use tasks elsewhere, you'd have to change this construct slightly (or make a copy), since popitem() consumes the dictionary.
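For example, here is a minimal sketch of a non-destructive variant, reusing the queue name, handler URL, and payload format from the question (add_in_batches is a made-up helper name, and the task name parameter is omitted for brevity):

import json

from google.appengine.api import taskqueue


def add_in_batches(tasks, batch_size=100):
    # Enqueue every entry of `tasks` without mutating the dictionary.
    queue = taskqueue.Queue(name="files-processor")
    items = tasks.items()  # a plain list in Python 2
    for start in range(0, len(items), batch_size):
        queue.add([
            taskqueue.Task(
                url="/analytics/process_files/",
                params={"processing_task": json.dumps(
                    {"profile": profile, "blobs_to_process": blobs})})
            for profile, blobs in items[start:start + batch_size]])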
