Embedding layer with one-hot encoded input
Let's assume that I have one-hot encoded input data. I will give an example from that Stack Exchange link.
Suppose that we have just 2 sentences:
Hope to see you soon
Nice to see you again
Therefore, we have 7 different words; let's say their unique integer indices (tokenized representations) are:
[0, 1, 2, 3, 4]
[5, 1, 2, 3, 6]
Thus, the one-hot representation of the first sentence, "Hope to see you soon", is:
[[1,0,0,0,0,0,0],
[0,1,0,0,0,0,0],
[0,0,1,0,0,0,0],
[0,0,0,1,0,0,0],
[0,0,0,0,1,0,0]]
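
For reference, a minimal NumPy sketch of how the integer indices above expand into that one-hot matrix (vocab_size = 7 is taken from the example; the variable names are only for illustration):

import numpy as np

vocab_size = 7
first_sentence = [0, 1, 2, 3, 4]   # "Hope to see you soon"

# Rows of the identity matrix, picked by token index, give the one-hot matrix shown above.
one_hot = np.eye(vocab_size, dtype=int)[first_sentence]
print(one_hot)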
Now, I know the working principle of the Keras Embedding layer: it works like a lookup table, which is why all the examples I can find use tokenized inputs. The same link also explains how the embedding layer works.
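
For context, the tokenized-input usage I have in mind looks roughly like this (output_dim=3 is an arbitrary value chosen only for illustration):

import numpy as np
import tensorflow as tf

# Tokenized sentences from the example, shape (2, 5).
tokenized = np.array([[0, 1, 2, 3, 4],
                      [5, 1, 2, 3, 6]])

# input_dim=7 matches the vocabulary size in the example.
embedding = tf.keras.layers.Embedding(input_dim=7, output_dim=3)

vectors = embedding(tokenized)
print(vectors.shape)   # (2, 5, 3): one 3-dimensional vector per token index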
However, I wonder: if I give one-hot encoded inputs to the Keras Embedding layer, does it still work properly? As far as I know, it can work with either tokenized inputs or one-hot encoded inputs. Also, does it create a performance difference?
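
To make the performance part of the question concrete, this is the equivalence I am thinking of, sketched in plain NumPy (the names here are illustrative, not the Keras API): selecting rows by index and multiplying by the one-hot matrix should pick out the same rows, but the two paths do different amounts of work.

import numpy as np

vocab_size, output_dim = 7, 3
weights = np.random.rand(vocab_size, output_dim)   # stands in for the learned embedding matrix

indices = np.array([0, 1, 2, 3, 4])                # tokenized first sentence
one_hot = np.eye(vocab_size)[indices]              # one-hot matrix from above

by_lookup = weights[indices]    # row lookup, as with tokenized input
by_matmul = one_hot @ weights   # matrix multiply, as with one-hot input

print(np.allclose(by_lookup, by_matmul))   # True: both select the same rows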