How can I perform an FFT as a layer in a Keras model with TensorFlow?

Posted 2025-02-06 23:04:02


I am trying to perform an FFT as a layer in a Keras model via TensorFlow.
I have tried a reduced version of the network, shown below, but you can see that the FFT layer is removing the imaginary portion of the input rather than giving the expected output. Can anyone explain what is going on here? Is there a better approach? Note: I am using TensorFlow 1.12.0. You can see how it differs from the NumPy approach below:

import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import matplotlib.pyplot as plt
tf.__version__
'1.12.0'
s = np.sin(np.linspace(0,4*3.14,64))
inputs = keras.layers.Input(shape=(None,1))
x = keras.layers.Lambda(lambda v: tf.spectral.fft(tf.cast(v,tf.complex64)))(inputs)
model = keras.Model(inputs=inputs,outputs=x)
y = model.predict(s.reshape(1,64,1))
y


array([[[ 0.        +0.j],
    [ 0.19804703+0.j],
    [ 0.3882484 +0.j],
    [ 0.5630694 +0.j],
    [ 0.71558434+0.j],
    [ 0.8397515 +0.j],
    [ 0.9306519 +0.j],
    [ 0.9846846 +0.j],
    [ 0.999709  +0.j],
    [ 0.97513   +0.j],
    [ 0.91192126+0.j],
    [ 0.81258684+0.j],
    [ 0.68106174+0.j],
    [ 0.5225565 +0.j],
    [ 0.3433501 +0.j],
    [ 0.15054196+0.j],
    [-0.0482299 +0.j],
    [-0.24509114+0.j],
    [-0.4322431 +0.j],
    [-0.6022718 +0.j],
    [-0.74844146+0.j],
    [-0.8649617 +0.j],
    [-0.94721645+0.j],
    [-0.99194735+0.j],
    [-0.9973822 +0.j],
    [-0.96330583+0.j],
    [-0.89106816+0.j],
    [-0.78353083+0.j],
    [-0.644954  +0.j],
    [-0.4808273 +0.j],
    [-0.29765266+0.j],
    [-0.10268652+0.j],
    [ 0.09634754+0.j],
    [ 0.2915648 +0.j],
    [ 0.47523174+0.j],
    [ 0.6400724 +0.j],
    [ 0.7795566 +0.j],
    [ 0.8881587 +0.j],
    [ 0.9615764 +0.j],
    [ 0.99690133+0.j],
    [ 0.992734  +0.j],
    [ 0.9492396 +0.j],
    [ 0.8681411 +0.j],
    [ 0.7526513 +0.j],
    [ 0.6073451 +0.j],
    [ 0.43797904+0.j],
    [ 0.25126243+0.j],
    [ 0.05459208+0.j],
    [-0.14424095+0.j],
    [-0.33735985+0.j],
    [-0.5171143 +0.j],
    [-0.67638326+0.j],
    [-0.8088573 +0.j],
    [-0.9092885 +0.j],
    [-0.9736983 +0.j],
    [-0.9995351 +0.j],
    [-0.9857753 +0.j],
    [-0.9329641 +0.j],
    [-0.8431935 +0.j],
    [-0.7200199 +0.j],
    [-0.56832266+0.j],
    [-0.39411137+0.j],
    [-0.2042874 +0.j],
    [-0.00637057+0.j]]], dtype=complex64)

np.fft.rfft(s)
array([-0.00308384+0.00000000e+00j,  0.02672711-6.26977476e-01j,
    3.00558863-3.15625216e+01j, -0.1749646 +1.19722520e+00j,
   -0.12854614+6.51698290e-01j, -0.11460713+4.60018771e-01j,
   -0.10826083+3.58242071e-01j, -0.10477286+2.93644521e-01j,
   -0.10263141+2.48314149e-01j, -0.10121581+2.14376326e-01j,
   -0.1002289 +1.87784215e-01j, -0.09951263+1.66227088e-01j,
   -0.09897615+1.48281277e-01j, -0.09856407+1.33017648e-01j,
   -0.09824097+1.19801769e-01j, -0.0979833 +1.08184304e-01j,
   -0.0977749 +9.78372194e-02j, -0.0976044 +8.85148156e-02j,
   -0.09746356+8.00289334e-02j, -0.09734636+7.22326372e-02j,
   -0.09724825+6.50091522e-02j, -0.09716581+5.82641686e-02j,
   -0.0970964 +5.19203650e-02j, -0.09703796+4.59134283e-02j,
   -0.09698889+4.01891074e-02j, -0.09694793+3.47009899e-02j,
   -0.09691409+2.94087961e-02j, -0.0968866 +2.42770460e-02j,
   -0.09686484+1.92739961e-02j, -0.09684836+1.43707750e-02j,
   -0.09683682+9.54066251e-03j, -0.09682999+4.75847143e-03j,
   -0.09682773+0.00000000e+00j])

Note that the model summary is as follows:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, None, 1)]         0
_________________________________________________________________
lambda (Lambda)              (None, None, 1)           0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0

However, I have seen this on a more recent version of TensorFlow (2.6.2) with the exact same result. There I used the following:

x = keras.layers.Lambda(lambda v: tf.signal.fft(tf.cast(v,tf.complex64)))(inputs)

Note: the "signal" attribute, instead of the "spectral".

Is the lambda layer going to allow for backpropagation of error to prior network layers?

I would really like to get this working on tensorflow 1.12.0, but could upgrade, if that is better/necessary to fix.

Any information that can be provided to help solve this problem would be much appreciated.


Comments (1)

一梦等七年七年为一梦 2025-02-13 23:04:02


My answer is based on TF v2.5.0, where tf.spectral.fft has been replaced with tf.signal.fft. According to the docs, this function

Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of input.

Since the last dimension of your input is 1, the function does not compute the FFT of the series, but of each individual number in the series. Since the FFT of a single number is the number itself, your output is exactly the same as your input.
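What the Lambda layer is doing can be reproduced with NumPy alone (a sketch, no TensorFlow needed): np.fft.fft also transforms over the last axis by default, so a trailing axis of length 1 gives a batch of length-1 FFTs, which is just the identity.

```python
import numpy as np

s = np.sin(np.linspace(0, 4 * 3.14, 64))

# Last axis has length 1: each "FFT" is over a single sample,
# so the output equals the input (cast to complex).
bad = np.fft.fft(s.reshape(1, 64, 1))
print(np.allclose(bad, s.reshape(1, 64, 1)))  # True

# Last axis has length 64: an actual 64-point FFT of the series.
good = np.fft.fft(s.reshape(1, 64))
print(np.allclose(good[0], np.fft.fft(s)))  # True
```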

You can fix this by changing the following two lines:

inputs = keras.layers.Input(shape=(None,))
y = model.predict(s.reshape(1,64))
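As a side note, the question compares against np.fft.rfft, which returns only the 33 non-negative-frequency bins of a 64-point transform, whereas tf.signal.fft (like np.fft.fft) returns all 64 complex bins. A NumPy sketch of the correspondence for real input:

```python
import numpy as np

s = np.sin(np.linspace(0, 4 * 3.14, 64))

full = np.fft.fft(s)   # shape (64,): all bins, as the fixed model returns
half = np.fft.rfft(s)  # shape (33,): non-negative frequencies only

# For real input, rfft equals the first n//2 + 1 bins of the full FFT.
print(np.allclose(full[:33], half))  # True
```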