Is there a better way to do this with multiprocessing?

Posted 2025-01-19 00:42:07

  1. I am using Python multiprocessing to process files. The last-processed
    record for each file is stored in a dict, i.e. dict_A = {'file1_xx': '8-04-22', 'file2_xx': '8-04-22', 'file3_xx': '8-04-22', 'file4_xx': '8-04-22'}
  2. The files directory is scanned, and filenames with their last-modified dates are stored in dict_test. The files recorded in both dicts are then compared to find new files: i.e. each file's last-modified date (e.g. for file1_xx) is compared against its last-processed date in dict_A. A condition updates dict_A whenever a file's last-modified date is greater than its last-processed date.
  3. I am facing an issue: the dictionary is not updated after the files are processed.
  4. Ideally, dict_A should be updated with the latest modified date per file of the same category. dict_A is then uploaded to a PostgreSQL database through SQLAlchemy.
import multiprocessing

def compare_rec(i):
    a = dict_A[i]
    b = dict_test[i]
    if a >= b:
        print("none")
    else:
        lock.acquire()
        print("found")
        a = b
        lock.release()

def init(l):
    global lock
    lock = l

if __name__ == '__main__':
    file_cat=['a', 'b', 'c', 'd']
    dict_A={'a': '10', 'b': '10', 'c': '10', 'd': '10'}
    dict_test={'a': '11', 'b': '11', 'c': '11', 'd': '11'}
    l = multiprocessing.Lock()
    pool = multiprocessing.Pool(initializer=init, initargs=(l,))
    pool.map(compare_rec, file_cat)
    pool.close()
    pool.join()
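For context, the dict_test scan described in point 2 can be sketched with os.path.getmtime. This is only an illustration of the scanning step; the directory argument and helper names are placeholders, not the production code:

```python
import os
import glob

def scan_mtimes(directory):
    # Map each filename in the directory to its last-modified timestamp,
    # mirroring the dict_test structure described above.
    return {os.path.basename(p): os.path.getmtime(p)
            for p in glob.glob(os.path.join(directory, '*'))
            if os.path.isfile(p)}

def newer_files(last_processed, scanned):
    # (name, mtime) pairs for files modified since they were last processed;
    # files never seen before count as new.
    return [(name, mtime) for name, mtime in scanned.items()
            if name not in last_processed or mtime > last_processed[name]]
```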


深白境迁sunset 2025-01-26 00:42:07


Processes don't share variables.

In the function I would use return to send the filename and date back to the main process:

if ...:
    return i, a  
else:
    return i, b  

The main process should get the results from all workers:

results = pool.map(compare_rec, file_cat)

and use them to update the dictionary:

dict_A.update(results)
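For reference, dict.update accepts an iterable of (key, value) pairs, which is exactly the shape that pool.map returns from compare_rec here:

```python
# dict.update takes an iterable of (key, value) pairs, the same shape
# that pool.map(compare_rec, file_cat) produces.
dict_A = {'a': '10', 'b': '10'}
results = [('a', '11'), ('b', '11')]  # as returned by the workers
dict_A.update(results)
print(dict_A)  # {'a': '11', 'b': '11'}
```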

Full code:

import multiprocessing

def compare_rec(key):
    print('key:', key)
    
    a = dict_A[key]
    b = dict_test[key]
    
    if a >= b:
        print("none", key, a)
        return key, a   
    else:
        print("found:", key, b)
        return key, b
    
if __name__ == '__main__':

    file_cat  = ['a', 'b', 'c', 'd']
    dict_A    = {'a': '10', 'b': '10', 'c': '10', 'd': '10'}
    dict_test = {'a': '11', 'b': '11', 'c': '11', 'd': '11'}

    pool = multiprocessing.Pool()
    
    results = pool.map(compare_rec, file_cat)
    print(results)
    
    print('before:', dict_A)
    dict_A.update(results)
    print('after :', dict_A)
    
    pool.close()
    pool.join()
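As an aside (an assumption on my part, not something tried in this thread): if you really do want workers to mutate a shared dict rather than return results, multiprocessing.Manager().dict() gives a proxy whose writes are visible to the parent process, at the cost of extra IPC per access. A minimal sketch with the same toy data:

```python
import multiprocessing

def update_if_newer(args):
    key, new_date, shared = args
    # Writes through the proxy are forwarded to the manager process, so the
    # parent sees the mutation (unlike a plain global dict in a worker).
    if shared[key] < new_date:
        shared[key] = new_date

def run_demo():
    scanned = {'a': '11', 'b': '11', 'c': '11', 'd': '11'}
    with multiprocessing.Manager() as manager:
        shared = manager.dict({'a': '10', 'b': '10', 'c': '12', 'd': '10'})
        with multiprocessing.Pool() as pool:
            pool.map(update_if_newer,
                     [(k, v, shared) for k, v in scanned.items()])
        return dict(shared)  # copy out before the manager shuts down

if __name__ == '__main__':
    print(run_demo())
```

Returning results, as in the answer above, is usually simpler and faster; the Manager is worth it only when workers must see each other's updates mid-run.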