How to compute the FFT of large data without exhausting memory?
I am computing an FFT on a machine with 16 GB of memory, and the process is killed because it runs out of memory:

print(data_size)
freqs, times, spec_arr = signal.spectrogram(
    data, fs=samp_rate, nfft=1024, return_onesided=False,
    axis=0, scaling='spectrum', mode='magnitude')

The output is:

537089518
Killed

How can I compute the FFT of data this large with an existing Python package?
A more general solution is to do it yourself. Thanks to the well-known Cooley–Tukey FFT algorithm and multidimensional decompositions, a 1D FFT can be split into many smaller FFTs. For more information about this strategy, read The Design and Implementation of FFTW3. You can perform the operation in virtually mapped memory to make this easier. Some libraries/packages, such as FFTW, let you perform fast in-place FFTs relatively easily. You may need to write your own Python package or use Cython so as not to allocate additional memory that is not memory-mapped.
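To illustrate the decomposition, here is a minimal sketch of the Cooley–Tukey "four-step" scheme in NumPy: an FFT of length n1*n2 is computed from n2 FFTs of length n1, a twiddle-factor multiplication, and n1 FFTs of length n2. In an out-of-core setting, each of these smaller FFTs can be done on a slice that fits in memory (the function name and in-memory arrays here are just for demonstration).

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Cooley-Tukey four-step FFT: an FFT of length n1*n2 built from
    many small FFTs of lengths n1 and n2. Each axis-wise FFT below
    could be performed on a memory-mapped slice in an out-of-core
    implementation."""
    n = n1 * n2
    a = x.reshape(n1, n2)                    # a[m1, m2] = x[n2*m1 + m2]
    b = np.fft.fft(a, axis=0)                # n2 FFTs of length n1
    k1 = np.arange(n1)[:, None]
    m2 = np.arange(n2)[None, :]
    b = b * np.exp(-2j * np.pi * k1 * m2 / n)  # twiddle factors
    c = np.fft.fft(b, axis=1)                # n1 FFTs of length n2
    return c.T.reshape(n)                    # X[k1 + n1*k2] = c[k1, k2]

# The result matches a direct full-size FFT.
x = np.random.default_rng(0).standard_normal(8 * 16)
assert np.allclose(four_step_fft(x, 8, 16), np.fft.fft(x))
```

This is exactly the factorization FFTW uses recursively; the point for the out-of-core case is that no single FFT call ever needs the whole array resident at once.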
An alternative solution is to save your data in HDF5 (for example using h5py), then use out_of_core_fft, and then read the file back. But be aware that this package is a bit old and appears to no longer be maintained.
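For the spectrogram case specifically, you can avoid out_of_core_fft entirely: since a spectrogram is already segment-based, you can process the signal in blocks and stream the result to an HDF5 dataset. A hedged sketch (the file and dataset names are hypothetical, and noverlap=0 is chosen so blocks can be processed independently without boundary artifacts):

```python
import h5py
import numpy as np
from scipy import signal

samp_rate = 1e6
nfft = 1024
cols_per_block = 64                   # spectrogram columns per block

# Create a small demo input; in practice 'signal' would be your
# huge recording, written once and far larger than RAM.
rng = np.random.default_rng(0)
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('signal',
                     data=rng.standard_normal(256 * nfft).astype('f4'))

with h5py.File('data.h5', 'r') as fin, h5py.File('spec.h5', 'w') as fout:
    x = fin['signal']                               # stays on disk
    n_cols = x.shape[0] // nfft                     # total output columns
    spec = fout.create_dataset('spectrogram', (nfft, n_cols), dtype='f4')
    for i in range(0, n_cols, cols_per_block):
        j = min(i + cols_per_block, n_cols)
        chunk = x[i * nfft : j * nfft]              # only this fits in RAM
        _, _, s = signal.spectrogram(
            chunk, fs=samp_rate, nperseg=nfft, noverlap=0, nfft=nfft,
            return_onesided=False, scaling='spectrum', mode='magnitude')
        spec[:, i:j] = s                            # stream result to disk
```

Because noverlap=0 and each block length is a multiple of nperseg, the concatenated blocks give the same result as one whole-signal spectrogram call, but peak memory is bounded by the block size rather than the full data size.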