Vision Transformer attention maps by keypoint location - TensorFlow

Posted on 2025-01-24 20:29:53

I have trained a ViT model in TensorFlow for keypoint estimation, based on https://github.com/yangsenius/TransPose, and I would like to visualize the attention maps for each keypoint, like this: https://raw.githubusercontent.com/yangsenius/TransPose/main/attention_map_image_dependency_transposeh_thres_0.00075.jpg

I found the PyTorch code, but I have no idea how to reproduce it in TensorFlow:
https://github.com/yangsenius/TransPose/blob/dab9007b6f61c9c8dce04d61669a04922bbcd148/visualize.py#L128

Comments (1)

我不咬妳我踢妳 2025-01-31 20:29:53

I solved it by taking the output of the layer immediately before each multi-head attention layer and passing it back through that attention layer with return_attention_scores=True:

from tensorflow.keras.models import Model

# Helper: index of a layer in model.layers, looked up by name.
def getLayerIndexByName(model, name):
    return next(i for i, layer in enumerate(model.layers) if layer.name == name)

# One sub-model per encoder block: each returns the output of the layer that
# feeds the MultiHeadAttention layer named 'encoded_<i>'.
atten_maps_hooks = [Model(inputs = model.input,
                          outputs = model.layers[getLayerIndexByName(model, f'encoded_{i}') - 1].output)
                    for i in range(6)]

enc_atten_maps_hwhw = []
for i in range(len(atten_maps_hooks)):
    # Activations entering the i-th attention layer for the given input image.
    temp = atten_maps_hooks[i].predict(input)
    # Re-run the attention layer on them to get the attention scores
    # (shape: (batch, num_heads, h*w, h*w); a single head is assumed below).
    mha, scores = model.get_layer('encoded_' + str(i))(temp, temp, return_attention_scores = True)
    # shape is the (h, w) token grid, so each stored map has shape (h, w, h, w).
    enc_atten_maps_hwhw.append(scores.numpy()[0].reshape(shape + shape))
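
To then show the map for a single keypoint, the attention at that keypoint's position can be sliced out and overlaid on the input image. A minimal sketch, assuming shape = (h, w) is the token grid used above, img is the network-input image as an array, and (qy, qx) is the keypoint position on that grid (these names are placeholders, not taken from the code above):

import matplotlib.pyplot as plt

# Placeholder values: keypoint position on the (h, w) token grid.
h, w = shape
qy, qx = 10, 12

attn = enc_atten_maps_hwhw[-1]   # (h, w, h, w) map from the last encoder block
keypoint_map = attn[qy, qx]      # (h, w): attention paid by the query at (qy, qx) to every location
# Use attn[..., qy, qx] instead to show attention flowing towards the keypoint position.

plt.imshow(img)                  # input image, e.g. an (H, W, 3) array
plt.imshow(keypoint_map, cmap = 'jet', alpha = 0.5,
           extent = (0, img.shape[1], img.shape[0], 0))   # stretch the h x w map over the image
plt.axis('off')
plt.show()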