Vectorized way to multiply over a specific axis in a NumPy array (convolutional layer backprop)
I'd like to know how to vectorize the following quadruple for-loop (it's used in the backprop of a convolutional layer).
import numpy as np

W = np.ones((2, 2, 3, 8))  # just a toy example
dW = np.zeros(W.shape)
dZ = np.ones((10, 4, 4, 8)) * 2
# get the shapes: m = samples/images; H_dim = height of image; W_dim = width of image; C = channels/filters
(m, H_dim, W_dim, C) = dZ.shape
dA_prev = np.zeros((10, 4, 4, 3))
# add symmetric padding of 2 with 0-values around the height and width borders; shape is now (10, 8, 8, 3)
dA_prev = np.pad(dA_prev, ((0,0), (2,2), (2,2), (0,0)), mode='constant', constant_values=(0,0))
# loop over images
for i in range(m):
    # loop over height
    for h in range(H_dim):
        # loop over width
        for w in range(W_dim):
            # loop over channels/filters
            for c in range(C):
                vert_start = 1 * h  # 1 = stride, just as an example
                vert_end = vert_start + 2  # 2 = vertical filter size, just as an example
                horiz_start = 1 * w  # 1 = stride
                horiz_end = horiz_start + 2  # 2 = horizontal filter size, just as an example
                dW[:, :, :, c] += dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, :] * dZ[i, h, w, c]
                dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]  # dZ[i, h, w, c] is a scalar
Backprop on the bias is very easy (db = np.sum(dZ, axis=(0,1,2), keepdims=True)), and the weights can be handled with stride tricks by reshaping dZ and then using a dot product with the re-strided input (or tensordot over the axes, or einsum).
from numpy.lib.stride_tricks import as_strided

def _striding(array, stride_size, filter_shapes, Wout=None, Hout=None):
    strides = (array.strides[0], array.strides[1] * stride_size, array.strides[2] * stride_size,
               array.strides[1], array.strides[2], array.strides[3])
    strided = as_strided(array, shape=(array.shape[0], Hout, Wout, filter_shapes[0], filter_shapes[1], array.shape[3]),
                         strides=strides, writeable=False)
    return strided

Hout = (A_prev.shape[1] - 2) // 1 + 1
Wout = (A_prev.shape[2] - 2) // 1 + 1

x_flat = _striding(array=A_prev, stride_size=2, filter_shapes=(2, 2),
                   Wout=Wout, Hout=Hout).reshape(-1, 2 * 2 * A_prev.shape[3])
dout_descendant_flat = dout_descendant.reshape(-1, n_C)

dW = x_flat.T @ dout_descendant_flat  # shape (fh * fw * n_prev_C, C)
dW = dW.reshape(fh, fw, n_prev_C, C)
This gives the same result for dW as the slow version. However, doing something similar for the derivative w.r.t. the input should yield the same result as well. Here's what I did:
dZ_pad = np.pad(dZ, ((0,0), (2,2), (2,2), (0,0)), mode='constant', constant_values=(0,0))  # padding to get the same shape as A_prev
dZ_pad_reshaped = _striding(array=dZ_pad, stride_size=1, filter_shapes=(2, 2),
                            Wout=4, Hout=4)  # the Hout and Wout dims are from the unpadded dims of A_prev
Wrot180 = np.rot90(W, 2, axes=(0, 1))  # the filter height and width are in the first two axes, which we want to rotate
dA_prev = np.tensordot(dZ_pad_reshaped, Wrot180, axes=([3, 4, 5], [0, 1, 3]))
dA_prev has the correct shape, but for some reason the result is not the same as in the slow version.
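As a sanity check, the stride-tricks approach for dW can be compared against the quadruple loop on toy shapes. This is a self-contained sketch, not the code above: it assumes stride 1, a 2x2 filter, and no padding, and the data is random and purely illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Toy shapes (hypothetical, chosen to mirror the question)
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4, 4, 3))   # input: (m, H, W, C_prev)
dZ = rng.standard_normal((10, 3, 3, 8))  # upstream gradient: (m, Hout, Wout, C)
fh, fw, stride = 2, 2, 1
Hout = (x.shape[1] - fh) // stride + 1
Wout = (x.shape[2] - fw) // stride + 1

# Strided view: one (fh, fw, C_prev) window per output position
s = x.strides
windows = as_strided(
    x,
    shape=(x.shape[0], Hout, Wout, fh, fw, x.shape[3]),
    strides=(s[0], s[1] * stride, s[2] * stride, s[1], s[2], s[3]),
    writeable=False,
)

# Vectorized dW: contract over the sample and spatial axes
dW_fast = np.tensordot(windows, dZ, axes=([0, 1, 2], [0, 1, 2]))  # (fh, fw, C_prev, C)

# Naive quadruple loop for comparison
dW_slow = np.zeros_like(dW_fast)
for i in range(x.shape[0]):
    for h in range(Hout):
        for w in range(Wout):
            for c in range(dZ.shape[3]):
                dW_slow[:, :, :, c] += x[i, h:h+fh, w:w+fw, :] * dZ[i, h, w, c]

print(np.allclose(dW_fast, dW_slow))  # True
```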
Answers (1)
OK, turns out the error was to do with several things:
1. dZ needed to be dilated relative to the stride in the forward propagation
2. the striding of dZ (done after dilation of dZ) needed to be called with stride 1 (no matter the stride choice in the forward propagation) and with the output heights and widths of the padded input (not the original, unpadded input -- this was the main mistake that took me days to debug)

The relevant code is below, with comments explaining shapes and operations as well as some further sources for reading. I've also included the forward propagation.
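The dilation of dZ described above can be sketched as follows. This is a minimal sketch, not the original code; `dilate` is a hypothetical helper name. It inserts (stride - 1) zeros between neighbouring dZ entries along the spatial axes, so that a stride-s forward pass can be back-propagated with a stride-1 correlation.

```python
import numpy as np

def dilate(dZ, stride):
    # Hypothetical helper: spread dZ out onto a zero grid so that its
    # entries sit `stride` apart along the height and width axes.
    if stride == 1:
        return dZ  # no dilation needed for stride 1
    m, H, W, C = dZ.shape
    out = np.zeros((m, (H - 1) * stride + 1, (W - 1) * stride + 1, C), dtype=dZ.dtype)
    out[:, ::stride, ::stride, :] = dZ
    return out

dZ = np.arange(8.0).reshape(1, 2, 2, 2)
print(dilate(dZ, 2).shape)  # (1, 3, 3, 2)
```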
I should note that after days of debugging, writing various functions, reading etc., the variable names changed after a while, so for ease of reading, here are the names of the variables as defined in my question and their equivalents in the code below:

A_prev is x
dZ is dout_descendant
Hout is the height of dout_descendant
Wout is the width of dout_descendant

(As one would expect, all references to self are to the class these functions are part of.)

I've left this answer here because all the other sources on Stack Overflow or GitHub I could find that used NumPy stride tricks were implemented for convolutions of stride 1 (which doesn't require dilation of dZ), or they used very complex fancy indexing operations that were extremely hard to follow (e.g. https://sgugger.github.io/convolution-in-depth.html#convolution-in-depth or https://github.com/parasdahal/deepnet/blob/51a9e61c351138b7dc637f4b748a0e6ca2e15595/deepnet/im2col.py).
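For completeness, a stride-tricks forward pass of the kind the answer refers to can be sketched like this. It is a minimal sketch rather than the original code: `conv_forward` is a hypothetical name, and it assumes stride 1 and no padding, using the question's toy shapes.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv_forward(x, W, stride=1):
    # x: (m, H, W, C_prev); W: (fh, fw, C_prev, C)
    m, H, Wd, _ = x.shape
    fh, fw, _, C = W.shape
    Hout = (H - fh) // stride + 1
    Wout = (Wd - fw) // stride + 1
    s = x.strides
    # One (fh, fw, C_prev) window per output position, as a zero-copy view
    windows = as_strided(
        x,
        shape=(m, Hout, Wout, fh, fw, x.shape[3]),
        strides=(s[0], s[1] * stride, s[2] * stride, s[1], s[2], s[3]),
        writeable=False,
    )
    # Contract each window against every filter
    return np.tensordot(windows, W, axes=([3, 4, 5], [0, 1, 2]))

x = np.ones((10, 4, 4, 3))
W = np.ones((2, 2, 3, 8))
print(conv_forward(x, W).shape)  # (10, 3, 3, 8)
```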