MATLAB block size and memory management

Posted on 2024-08-19


I'm using a block processing approach to handle a calculation between two large matrices.

The code significantly speeds up when using a larger block size. But if I go too large, then I get an Out of Memory error. Currently I hand-tune my code to find the largest working block size for a given input.

My question: how can I automate the process of finding the largest possible block size?

I've toyed with wrapping everything in a try/catch block and looping with progressively smaller block sizes till it succeeds. I'm hoping there is a more elegant or idiomatic way.
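For reference, the try/catch fallback I've been toying with looks roughly like this (`processBlock` is a stand-in for my actual per-block computation, and the halving step is arbitrary):

```matlab
function result = processWithFallback(mat1, mat2, startBlockSize)
%# Try progressively smaller block sizes until one fits in memory.
%# processBlock is a placeholder for the real per-block calculation.
blockSize = startBlockSize;
while blockSize >= 1
    try
        result = processBlock(mat1, mat2, blockSize);
        return;                              %# Success - stop shrinking
    catch err
        if strcmp(err.identifier, 'MATLAB:nomem')
            blockSize = floor(blockSize/2);  %# Halve the block and retry
        else
            rethrow(err);                    %# Not a memory error - don't swallow it
        end
    end
end
error('No block size small enough to fit in memory.');
```

It works, but a failed allocation can be slow to raise, and each retry redoes setup work, which is why I'm hoping for something more direct.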


Comments (1)

油焖大侠 2024-08-26 00:59:31


Before doing the block processing, you can use the MEMORY function to see how much memory is already being used and how much is left available for any additional variables the block processing may need to create. If you can estimate the total amount of memory the block processing steps will need as a function of the block size, you can figure out how large the block size can be before you run out of available memory. This may be easier said than done, since I don't know exactly how you are doing the block processing.

Here's a simple example. I'll start by clearing the workspace and creating 2 large matrices:

>> clear all
>> mat1 = zeros(8000);  %# An 8000-by-8000 matrix of doubles
>> mat2 = zeros(8000);  %# Another 8000-by-8000 matrix of doubles

Now, let's say I know I will have to allocate an N-by-N matrix of doubles, which will require 8*N*N bytes of memory (8 bytes per double). I can do the following to find out how large I can make N:

>> uV = memory  %# Get the memory statistics

uV = 

    MaxPossibleArrayBytes: 314990592
    MemAvailableAllArrays: 643969024
            MemUsedMATLAB: 1.2628e+009

>> maxN = floor(sqrt(uV.MaxPossibleArrayBytes/8))  %# Compute the maximum N

maxN =

        6274

>> mat3 = ones(maxN);    %# Works fine
>> mat3 = ones(maxN+1);  %# Tanks! Too large!
??? Out of memory. Type HELP MEMORY for your options.

If you are routinely having trouble with running out of memory, here are a couple of things you can do:

  • Use single precision (or integer types) for large matrices instead of the default double precision.
  • Be sure to clear variables you don't need anymore (especially if they are large).
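Putting this together, you could automate the block-size choice by querying MEMORY right before your block loop. This is only a sketch: it assumes each block step needs roughly `bytesPerElement*blockSize^2` bytes of temporary workspace (adjust the formula to match your actual computation), and it leaves a safety margin, since `MaxPossibleArrayBytes` is an upper bound for a single contiguous array and the MEMORY function is only available on Windows:

```matlab
function blockSize = chooseBlockSize(bytesPerElement, maxBlockSize)
%# Pick the largest block size whose N-by-N temporary workspace
%# should fit in the largest available contiguous memory block.
%# bytesPerElement: 8 for double, 4 for single, etc.
safetyFactor = 0.8;   %# Headroom, since fragmentation makes the bound optimistic
uV = memory;          %# Query current memory statistics (Windows only)
maxBytes = safetyFactor * uV.MaxPossibleArrayBytes;
blockSize = floor(sqrt(maxBytes / bytesPerElement));
blockSize = min(blockSize, maxBlockSize);  %# Don't exceed the problem size
```

Calling this once up front replaces the hand-tuning, at the cost of having to model your block step's memory use as a function of the block size.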