SQLAlchemy misuse causing a memory leak?

Posted 2024-10-30 15:13:42

My program is sucking up a meg every few seconds. I read that Python doesn't see cursors in garbage collection, so I have a feeling that I might be doing something wrong with my use of pyodbc and SQLAlchemy and maybe not closing something somewhere?

# Set up the SQL connection (imports added for completeness)
import atexit
import re

import pyodbc
from sqlalchemy import MetaData, Table, create_engine

def connect():
    conn_string = 'DRIVER={FreeTDS};Server=...;Database=...;UID=...;PWD=...'
    return pyodbc.connect(conn_string)

metadata = MetaData()
e = create_engine('mssql://', creator=connect)
c = e.connect()                                     # one long-lived connection
metadata.bind = c                                   # pre-1.4 style: bind metadata to it
log_table = Table('Log', metadata, autoload=True)   # reflect the existing Log table

...
atexit.register(cleanup)  # cleanup() is defined elsewhere (elided above)
# Core loop: buffer parsed rows and flush every `insert_size` lines
line_c = 0
inserts = []
insert_size = 2000
while True:
    # line = sys.stdin.readline()
    line = reader.readline()
    line_c += 1
    m = line_regex.match(line)
    if m:
        fields = m.groupdict()
        ...
        inserts.append(fields)
        if line_c >= insert_size:
            # executemany-style batch insert on the shared connection
            c.execute(log_table.insert(), inserts)
            line_c = 0
            inserts = []

Should I maybe move the metadata block, or part of it, into the insert block and close the connection after each insert?
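
That idea, sketched minimally against the pre-1.4 SQLAlchemy API the post already uses: check a pooled connection out per batch and return it right away, so nothing connection-scoped outlives the flush. The helper name flush_batch is made up for illustration.

# Hypothetical per-batch variant (a sketch, not the poster's code)
def flush_batch(engine, table, rows):
    conn = engine.connect()     # checked out from the engine's pool
    try:
        conn.execute(table.insert(), rows)
    finally:
        conn.close()            # returned to the pool, not torn down

The loop would then call flush_batch(e, log_table, inserts) in place of c.execute(log_table.insert(), inserts).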

Edit:
Q: Does it ever stabilize?

[memory-usage graph]

A: Only if you count Linux blowing away the process :-) (The graph does exclude buffers/cache from memory usage.)
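
(An aside, not from the original post: one way to log the process's growth from inside the script is the standard-library resource module; on Linux, ru_maxrss is reported in kilobytes.)

import resource

def peak_rss_kb():
    # peak resident set size of this process; kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(peak_rss_kb())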

Comments (1)

束缚m 2024-11-06 15:13:42

I would not necessarily blame SQLAlchemy. It could also be a problem in the underlying driver. In general, memory leaks are hard to detect. In any case, you should ask on the SQLAlchemy mailing list, where core developer Michael Bayer responds to almost every question... perhaps a better chance of getting real help there.
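
Before posting to the list, it may help to confirm where the growth actually comes from. A minimal sketch using the standard-library tracemalloc module (Python 3.4+; not part of the original answer, and the top-10 cutoff is arbitrary):

import tracemalloc

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

# ... run the ingest loop for a few batches ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.compare_to(baseline, 'lineno')[:10]:
    print(stat)   # top allocation sites ranked by growth since baseline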
