Hierarchical dictionary (reduce memory footprint or use a database)

Posted 2025-02-05 23:22:02


I am working with extremely high-dimensional biological count data (single-cell RNA sequencing, where rows are cell IDs and columns are genes).

Each dataset is a separate flat file (AnnData format). Each flat file can be broken down by various metadata attributes, including cell type (e.g., muscle cell, heart cell), subtype (e.g., a lung dataset can be split into normal lung and cancerous lung), cancer stage (e.g., stage 1, stage 2), etc.

The goal is to pre-compute aggregate metrics for each specific metadata column, sub-group, dataset, cell type, gene combination and keep them readily accessible, so that when a person queries my web app for a plot I can quickly retrieve the results (refer to the figure below to understand what I want to create). I have written Python code to assemble the dictionary below, and it has sped up how quickly I can create visualizations.

The only issue now is that the memory footprint of this dictionary is very high (there are ~10,000 genes per dataset). What is the best way to reduce the memory footprint of this dictionary? Or should I consider another storage framework (I briefly saw something called Redis Hashes)?

[Figure: Hierarchical Dictionary]
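In case the figure does not come through, here is a rough sketch of the nested layout described above; the key and metric names are illustrative placeholders, not the real schema.

# Illustrative sketch of the pre-computed hierarchical dictionary (hypothetical keys)
precomputed = {
    'cancer_stage': {                      # metadata column
        'stage_1': {                       # sub-group within that column
            'lung_dataset': {              # dataset (one AnnData flat file)
                'muscle_cell': {           # cell type
                    'GENE_A': {            # gene (~10,000 per dataset)
                        'Agg_metric_1': 2.31,
                        'Agg_metric_2': 0.47,
                    },
                },
            },
        },
    },
}

# A plot request then reduces to a chain of key lookups:
value = precomputed['cancer_stage']['stage_1']['lung_dataset']['muscle_cell']['GENE_A']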


黑寡妇 2025-02-12 23:22:02


One option to reduce your memory footprint but keep fast lookups is to use an HDF5 file as a database. This will be a single large file that lives on your disk instead of in memory, but it is structured the same way as your nested dictionary and allows rapid lookups by reading in only the data you need. Writing the file will be slow, but you only have to do it once and can then upload it to your web app.

To test this idea, I've created two test nested dictionaries in the format of the diagram you shared. The small one has 1e5 metadata/group/dataset/celltype/gene entries, and the other is 10 times larger.

Writing the small dict to HDF5 took ~2 minutes and resulted in a 140 MB file, while the larger dict-dataset took ~14 minutes to write and is a 1.4 GB file.

Querying the small and large HDF5 files took similar amounts of time, showing that the queries scale well to more data.

Here's the code I used to create the test dict-datasets, write them to HDF5, and query:

import h5py
import numpy as np
import time

def create_data_dict(level_counts):
    """
    Create test data in the same nested-dict format as the diagram you show
    The Agg_metric values are random floats between 0 and 1
    (you shouldn't need this function since you already have real data in dict format)
    """
    if not level_counts:
        return {f'Agg_metric_{i+1}':np.random.random() for i in range(num_agg_metrics)}
    
    level,num_groups = level_counts.popitem()
    return {f'{level}_{i+1}':create_data_dict(level_counts.copy()) for i in range(num_groups)}


def write_dict_to_hdf5(hdf5_path,d):
    """
    Write the nested dictionary to an HDF5 file to act as a database
    only have to create this file once, but can then query it any number of times
    (unless the data changes)
    """
    def _recur_write(f,d):
        for k,v in d.items():
            #peek at one value to check if the next level is also a dict
            sv = next(iter(v.values()))

            if isinstance(sv,dict):
                #this is a 'node', move on to the next level
                _recur_write(f.create_group(k),v)
            else:
                #this is a 'leaf': store the aggregate metrics as attributes
                leaf = f.create_group(k)
                for sk,sv in v.items():
                    leaf.attrs[sk] = sv
        
    with h5py.File(hdf5_path,'w') as f:
        _recur_write(f,d)
        
        
def query_hdf5(hdf5_path,search_terms):
    """
    Query the hdf5_path with a list of search terms
    The search terms must be in the order of the dict, and have a value at each level
    Output is a dict of agg stats
    """
    with h5py.File(hdf5_path,'r') as f:
        k = '/'.join(search_terms)
        try:
            f = f[k]
        except KeyError:
            print("oh no! at least one of the search terms wasn't matched")
            return {}
                       
        return dict(f.attrs)

################
#     start    #
################
#this "small_level_counts" results in an hdf5 file of size 140 MB (took < 2 minutes to make)
#all possible nested dictionaries are made,
#so there are 40*30*10*3*3 = ~1e5 metadata/group/dataset/celltype/gene entries
num_agg_metrics = 7
small_level_counts = {
    'Gene':40,
    'Cell_Type':30,
    'Dataset':10,
    'Unique_Group':3,
    'Metadata':3,
}

#"large_level_counts" results in an hdf5 file of size 1.4 GB (took 14 mins to make)
#has 400*30*10*3*3 = ~1e6 metadata/group/dataset/celltype/gene combinations
num_agg_metrics = 7
large_level_counts = {
    'Gene':400,
    'Cell_Type':30,
    'Dataset':10,
    'Unique_Group':3,
    'Metadata':3,
}

#Determine which test dataset to use
small_test = True
if small_test:
    level_counts = small_level_counts
    hdf5_path = 'small_test.hdf5'
else:
    level_counts = large_level_counts
    hdf5_path = 'large_test.hdf5'


np.random.seed(1)
start = time.time()
data_dict = create_data_dict(level_counts)
print('created dict in {:.2f} seconds'.format(time.time()-start))

start = time.time()
write_dict_to_hdf5(hdf5_path,data_dict)
print('wrote hdf5 in {:.2f} seconds'.format(time.time()-start))

#Search terms in order of most broad to least
search_terms = ['Metadata_1','Unique_Group_3','Dataset_8','Cell_Type_15','Gene_17']

start = time.time()
query_result = query_hdf5(hdf5_path,search_terms)
print('queried in {:.2f} seconds'.format(time.time()-start))

direct_result = data_dict['Metadata_1']['Unique_Group_3']['Dataset_8']['Cell_Type_15']['Gene_17']

print(query_result == direct_result)
残花月 2025-02-12 23:22:02


Although Python dictionaries themselves are fairly efficient in terms of memory usage, you are likely storing multiple copies of the strings you use as dictionary keys. From your description of your data structure, you probably have 10,000 copies of 'Agg metric 1', 'Agg metric 2', etc., one for every gene in your dataset, and these duplicate strings are likely taking up a significant amount of memory. They can be deduplicated with sys.intern, so that although you still have just as many references to the string in your dictionary, they all point to a single copy in memory. You would only need to make a minimal adjustment to your code, changing the assignment to data[sys.intern('Agg metric 1')] = value. I would do this for all of the keys used at all levels of your dictionary hierarchy.
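As a rough sketch of what that change looks like (the metric names below are placeholders, not your real keys):

import sys

# Minimal sketch of the interning idea. Keys built at runtime (e.g. with
# f-strings) are normally distinct string objects in CPython, so every leaf
# dict carries its own copies; sys.intern collapses equal keys onto one
# shared object.
def make_leaf(values, intern_keys=False):
    leaf = {}
    for i, value in enumerate(values):
        key = f'Agg_metric_{i + 1}'      # a fresh string object on every call
        if intern_keys:
            key = sys.intern(key)        # reuse the single canonical copy instead
        leaf[key] = value
    return leaf

plain_a, plain_b = make_leaf([0.1, 0.2]), make_leaf([0.3, 0.4])
intern_a, intern_b = make_leaf([0.1, 0.2], True), make_leaf([0.3, 0.4], True)

# Equal keys are separate objects without interning, shared with interning
print([a is b for a, b in zip(plain_a, plain_b)])    # typically [False, False]
print([a is b for a, b in zip(intern_a, intern_b)])  # [True, True]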
