How can I solve this memory error? Should I increase the memory limit?

Posted 2025-02-12 17:14:58

I was loading this, but it gives the error below.

import pandas as pd
import numpy as np

# Load the precomputed user-by-movie matrix for the Action genre
userMovie = np.load('userMovieMatrixAction.npy')

numberUsers, numberGenreMovies = userMovie.shape

# Load the per-movie metadata for the same genre
genreFilename = 'Action.csv'
genre = pd.read_csv(genreFilename)

MemoryError: Unable to allocate 3.63 GiB for an array with shape (487495360,) and data type float64
What can I do? It's driving me crazy.

Comments (2)

初心 2025-02-19 17:14:58

If the program runs out of memory, it looks like an issue with your operating system's overcommit handling. If you are on Linux, you can try running the following command to enable "always overcommit" mode, which can help you load the 3.63 GiB .npy file with numpy:

$ echo 1 > /proc/sys/vm/overcommit_memory
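
Note that /proc/sys/vm/overcommit_memory is writable only by root, and the setting reverts on reboot (sysctl -w vm.overcommit_memory=1 does the same thing). A minimal sketch, assuming Linux, for checking the current policy from Python before retrying the load:

import pathlib

# Read the current overcommit policy (Linux only).
# 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = never.
mode = int(pathlib.Path('/proc/sys/vm/overcommit_memory').read_text().strip())
print(f'vm.overcommit_memory = {mode} (1 = always overcommit)')
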
梓梦 2025-02-19 17:14:58

  1. First things first: you don't need float64. You can save this as float32 using pandas' astype casting function, documented here:
    https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html
    Even if you were working on a neural network you would save as float32; you will halve your byte footprint this way.
  2. You may need to build a generator so the data is loaded into memory in chunks as it is needed, rather than entirely at once as you are trying to do (see the sketch at the end of this answer).
    Link to the Python guide:
    https://docs.python.org/3/howto/functional.html#generators
    Kaggle notebook using generators:
    https://www.kaggle.com/code/vbookshelf/python-generators-to-reduce-ram-usage-part-2/notebook

As far as I can see, you are starting to build a recommender system....
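
Putting both points together, a minimal sketch, assuming the file names from the question; mmap_mode='r' makes numpy map the .npy file lazily instead of allocating the full 3.63 GiB up front, and chunkSize is an arbitrary illustrative value:

import numpy as np
import pandas as pd

# Memory-map the matrix: rows are read from disk on demand,
# so no single 3.63 GiB allocation is attempted.
userMovie = np.load('userMovieMatrixAction.npy', mmap_mode='r')

# Point 1: downcast the float64 columns of the CSV to float32.
genre = pd.read_csv('Action.csv')
floatCols = genre.select_dtypes(include='float64').columns
genre[floatCols] = genre[floatCols].astype(np.float32)

# Point 2: a generator that yields float32 row chunks, so only one
# chunk is materialized in memory at a time.
def chunkedRows(matrix, chunkSize=10000):
    for start in range(0, matrix.shape[0], chunkSize):
        yield np.asarray(matrix[start:start + chunkSize], dtype=np.float32)

for chunk in chunkedRows(userMovie):
    pass  # process each chunk here (e.g. accumulate per-user statistics)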
