What is the role of bias in neural networks?

Posted on 2024-08-26 01:45:28

Comments (18)

滥情稳全场 2024-09-02 01:45:29

In simpler terms, biases allow for more and more variations of weights to be learnt/stored... (side-note: sometimes given some threshold). Anyway, more variations mean that biases add richer representation of the input space to the model's learnt/stored weights. (Where better weights can enhance the neural net’s guessing power)

For example, in learning models, the hypothesis/guess is desirably bounded by y=0 or y=1 given some input, in maybe some classification task... i.e. some y=0 for some x=(1,1) and some y=1 for some x=(0,1). (The condition on the hypothesis/outcome is the threshold I talked about above. Note that my example sets up the inputs X so that each x is a double or 2-valued vector, instead of Nate's single-valued x inputs from some collection X.)

If we ignore the bias, many inputs may end up being represented by a lot of the same weights (i.e. the learnt weights mostly occur close to the origin (0,0)).
The model would then be limited to poorer quantities of good weights, instead of the many many more good weights it could better learn with bias. (Where poorly learnt weights lead to poorer guesses or a decrease in the neural net’s guessing power)

So, it is optimal that the model learns both close to the origin, but also, in as many places as possible inside the threshold/decision boundary. With the bias we can enable degrees of freedom close to the origin, but not limited to origin's immediate region.

痴意少年 2024-09-02 01:45:29

Expanding on zfy's explanation:

The equation for one input, one neuron, one output should look like:

y = a * x + b * 1    and out = f(y)

where x is the value from the input node and 1 is the value of the bias node;
y can be directly your output or be passed into a function, often a sigmoid function. Also note that the bias could be any constant, but to make everything simpler we always pick 1 (and probably that's so common that zfy did it without showing & explaining it).

Your network is trying to learn coefficients a and b to adapt to your data.
So you can see why adding the element b * 1 allows it to fit better to more data: now you can change both slope and intercept.

If you have more than one input your equation will look like:

y = a0 * x0 + a1 * x1 + ... + aN * 1

Note that the equation still describes a one neuron, one output network; if you have more neurons you just add one dimension to the coefficient matrix, to multiplex the inputs to all nodes and sum back each node contribution.

That you can write in vectorized format as

A = [a0, a1, .., aN] , X = [x0, x1, ..., 1]
Y = A . XT

i.e. putting the coefficients in one array and (inputs + bias) in another, you have your desired solution as the dot product of the two vectors (you need to transpose X for the shapes to be correct; I wrote XT for 'X transposed').

So in the end you can also see your bias as just one more input, representing the part of the output that is actually independent of your input.
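A minimal numeric sketch of this vectorized view (the weights, inputs, and the sigmoid choice are illustrative assumptions, not values from the answer): the bias is simply one more input fixed at 1, so the whole neuron reduces to a dot product.

import numpy as np

# One neuron, two real inputs, plus a bias input that is always 1.
A = np.array([0.4, -0.7, 2.0])   # [a0, a1, aN]; the last entry is the bias weight
x = np.array([1.5, 3.0])         # the actual inputs [x0, x1]

X = np.append(x, 1.0)            # augment with the constant bias input -> [x0, x1, 1]
y = A @ X                        # a0*x0 + a1*x1 + aN*1, i.e. A . XT

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

out = sigmoid(y)                 # optional activation, as in out = f(y)
print(y, out)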

伪装你 2024-09-02 01:45:29

For all the ML books I studied, W is always defined as the connectivity index between two neurons: the higher the connectivity between two neurons, the stronger the signal transmitted from the firing neuron to the target neuron, i.e. Y = w * X. To maintain the biological character of neurons we need to keep 1 >= W >= -1, but in real regression W will end up with |W| >= 1, which contradicts how the neurons work.

As a result, I propose W = cos(theta), where 1 >= |cos(theta)|, and Y = a * X = W * X + b, where a = b + W = b + cos(theta) and b is an integer.

顾北清歌寒 2024-09-02 01:45:29

Bias acts as our anchor. It's a way for us to have some kind of baseline that we don't go below. In terms of a graph, think of y = mx + b: it's like the y-intercept of this function.

output = input times the weight value plus a bias value, then apply an activation function.

苍风燃霜 2024-09-02 01:45:29

The term bias is used to adjust the final output matrix as the y-intercept does. For instance, in the classic equation, y = mx + c, if c = 0, then the line will always pass through 0. Adding the bias term provides more flexibility and better generalisation to our neural network model.

长不大的小祸害 2024-09-02 01:45:29

The bias helps to get a better equation.

Imagine the input and output like a function y = ax + b, where you need to put the right line between the input (x) and output (y) to minimise the global error between each point and the line. If you keep the equation as y = ax, you will have only one parameter to adapt; even if you find the best a minimising the global error, it will be kind of far from the wanted value.

You can say the bias makes the equation more flexible, so it can adapt to the best values.
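A hedged, made-up example of that point (the data values below are invented for illustration): fitting y = ax versus y = ax + b to a few points that clearly do not pass through the origin shows how much the extra parameter reduces the global error.

import numpy as np

# Toy data that does not pass through the origin.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.9, 4.1, 5.0])

# Best a for y = a*x (no bias): least squares through the origin.
a_no_bias = (x @ y) / (x @ x)

# Best (a, b) for y = a*x + b: least squares with a column of ones for the intercept.
X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

def sse(pred):
    return float(np.sum((pred - y) ** 2))

print("y = ax    :", a_no_bias, "error:", sse(a_no_bias * x))
print("y = ax + b:", a, b, "error:", sse(a * x + b))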

半﹌身腐败 2024-09-02 01:45:28

I think that biases are almost always helpful. In effect, a bias value allows you to shift the activation function to the left or right, which may be critical for successful learning.

It might help to look at a simple example. Consider this 1-input, 1-output network that has no bias:

simple network

The output of the network is computed by multiplying the input (x) by the weight (w0) and passing the result through some kind of activation function (e.g. a sigmoid function.)

Here is the function that this network computes, for various values of w0:

network output, given different w0 weights

Changing the weight w0 essentially changes the "steepness" of the sigmoid. That's useful, but what if you wanted the network to output 0 when x is 2? Just changing the steepness of the sigmoid won't really work -- you want to be able to shift the entire curve to the right.

That's exactly what the bias allows you to do. If we add a bias to that network, like so:

simple network with a bias

...then the output of the network becomes sig(w0*x + w1*1.0). Here is what the output of the network looks like for various values of w1:

network output, given different w1 weights

Having a weight of -5 for w1 shifts the curve to the right, which allows us to have a network that outputs 0 when x is 2.
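A small sketch of the shift being described (the weight values are only examples): with w0 = 1 the plain sigmoid gives about 0.88 at x = 2, while adding a bias weight w1 = -5 moves the curve so the output at x = 2 drops close to 0.

import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

w0 = 1.0   # input weight
x = 2.0

# Without a bias, only the steepness of the curve can change.
print(sig(w0 * x))               # ~0.88, nowhere near the desired 0

# With a bias node emitting 1.0 and weight w1 = -5, the whole curve shifts right.
w1 = -5.0
print(sig(w0 * x + w1 * 1.0))    # ~0.05, close to the desired 0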

清欢 2024-09-02 01:45:28

A simpler way to understand what the bias is: it is somehow similar to the constant b of a linear function

y = ax + b

It allows you to move the line up and down to fit the prediction with the data better.

Without b, the line always goes through the origin (0, 0) and you may get a poorer fit.

昔日梦未散 2024-09-02 01:45:28

Here are some further illustrations showing the result of a simple 2-layer feed forward neural network with and without bias units on a two-variable regression problem. Weights are initialized randomly and standard ReLU activation is used. As the answers before me concluded, without the bias the ReLU-network is not able to deviate from zero at (0,0).

[figures omitted]
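A hedged sketch of that observation (the architecture and random weights here are assumptions, not the network behind the figures): with ReLU activations and no bias terms, the output at the input (0,0) is always exactly zero, whatever the weights are, while the biased network is free to output anything there.

import numpy as np

rng = np.random.default_rng(0)

# 2 inputs -> 4 hidden ReLU units -> 1 output, weights drawn at random.
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(1, 4))
b1 = rng.normal(size=4)
b2 = rng.normal(size=1)

def relu(z):
    return np.maximum(0.0, z)

def net_no_bias(x):
    return W2 @ relu(W1 @ x)

def net_with_bias(x):
    return W2 @ relu(W1 @ x + b1) + b2

origin = np.array([0.0, 0.0])
print(net_no_bias(origin))    # always [0.] at the origin, for any weights
print(net_with_bias(origin))  # can be any value, set by the biases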

审判长 2024-09-02 01:45:28

Two different kinds of parameters can
be adjusted during the training of an
ANN, the weights and the value in the
activation functions. This is
impractical and it would be easier if
only one of the parameters should be
adjusted. To cope with this problem a
bias neuron is invented. The bias
neuron lies in one layer, is connected
to all the neurons in the next layer,
but none in the previous layer and it
always emits 1. Since the bias neuron
emits 1 the weights, connected to the
bias neuron, are added directly to the
combined sum of the other weights
(equation 2.1), just like the t value
in the activation functions.

The reason it's impractical is because you're simultaneously adjusting the weight and the value, so any change to the weight can neutralize the change to the value that was useful for a previous data instance... adding a bias neuron without a changing value allows you to control the behavior of the layer.

Furthermore the bias allows you to use a single neural net to represent similar cases. Consider the AND boolean function represented by the following neural network:

ANN
(source: aihorizon.com)

  • w0 corresponds to b.
  • w1 corresponds to x1.
  • w2 corresponds to x2.

A single perceptron can be used to
represent many boolean functions.

For example, if we assume boolean values
of 1 (true) and -1 (false), then one
way to use a two-input perceptron to
implement the AND function is to set
the weights w0 = -3, and w1 = w2 = .5.
This perceptron can be made to
represent the OR function instead by
altering the threshold to w0 = -.3. In
fact, AND and OR can be viewed as
special cases of m-of-n functions:
that is, functions where at least m of
the n inputs to the perceptron must be
true. The OR function corresponds to
m = 1 and the AND function to m = n.
Any m-of-n function is easily
represented using a perceptron by
setting all input weights to the same
value (e.g., 0.5) and then setting the
threshold w0 accordingly.

Perceptrons can represent all of the
primitive boolean functions AND, OR,
NAND (¬ AND), and NOR (¬ OR). (Machine Learning - Tom Mitchell)

The threshold is the bias and w0 is the weight associated with the bias/threshold neuron.
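A minimal sketch of the idea in this answer. Note an assumption: it uses 0/1 inputs and bias weights chosen so the arithmetic works out (-0.8 for AND, -0.3 for OR), rather than the 1/-1 convention and the w0 value quoted above; only the bias/threshold weight w0 differs between the two functions.

def perceptron(x1, x2, w1, w2, w0):
    # Fire (output 1) if w1*x1 + w2*x2 + w0*1 > 0, else output 0.
    # w0 is the threshold/bias weight attached to a constant input of 1.
    return 1 if w1 * x1 + w2 * x2 + w0 * 1 > 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2,
          "AND:", perceptron(x1, x2, 0.5, 0.5, -0.8),
          "OR:",  perceptron(x1, x2, 0.5, 0.5, -0.3))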

离去的眼神 2024-09-02 01:45:28

The bias is not an NN term. It's a generic algebra term to consider.

Y = M*X + C (straight line equation)

Now if C (the bias) = 0, the line will always pass through the origin, i.e. (0,0), and depends on only one parameter, M, the slope, so we have fewer things to play with.

C, which is the bias, can take any number and shifts the graph, and hence the model is able to represent more complex situations.

In a logistic regression, the expected value of the target is transformed by a link function to restrict its value to the unit interval. In this way, model predictions can be viewed as primary outcome probabilities as shown:

Sigmoid function on Wikipedia

This is the final activation layer in the NN map that turns on and off the neuron. Here also bias has a role to play and it shifts the curve flexibly to help us map the model.

说谎友 2024-09-02 01:45:28

A layer in a neural network without a bias is nothing more than the multiplication of an input vector with a matrix. (The output vector might be passed through a sigmoid function for normalisation and for use in multi-layered ANN afterwards, but that’s not important.)

This means that you’re using a linear function and thus an input of all zeros will always be mapped to an output of all zeros. This might be a reasonable solution for some systems but in general it is too restrictive.

Using a bias, you're effectively adding another dimension to your input space, which always takes the value one, so you're avoiding an input vector of all zeros. You don't lose any generality by this because your trained weight matrix need not be surjective, so it still can map to all values previously possible.

2D ANN:

For an ANN mapping two dimensions to one dimension, as in reproducing the AND or OR (or XOR) functions, you can think of the neural network as doing the following:

On the 2D plane mark all positions of input vectors. So, for boolean values, you’d want to mark (-1,-1), (1,1), (-1,1), (1,-1). What your ANN now does is drawing a straight line on the 2d plane, separating the positive output from the negative output values.

Without bias, this straight line has to go through zero, whereas with bias, you’re free to put it anywhere.
So, you'll see that without bias you're facing a problem with the AND function, since you can't put both (1,-1) and (-1,1) on the negative side. (They are not allowed to be on the line.) The problem is the same for the OR function. With a bias, however, it's easy to draw the line.

Note that the XOR function in that situation can’t be solved even with bias.

回首观望 2024-09-02 01:45:28

When you use ANNs, you rarely know about the internals of the systems you want to learn. Some things cannot be learned without a bias. E.g., have a look at the following data: (0, 1), (1, 1), (2, 1), basically a function that maps any x to 1.

If you have a one-layer network (or a linear mapping), you cannot find a solution. However, if you have a bias, it's trivial!

In an ideal setting, a bias could also map all points to the mean of the target points and let the hidden neurons model the differences from that point.
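A quick sketch using the data from this answer ((0,1), (1,1), (2,1)): a single linear neuron y = w*x cannot map every x to 1, while adding a bias makes the solution trivial.

import numpy as np

# Every x should map to 1.
x = np.array([0.0, 1.0, 2.0])
t = np.array([1.0, 1.0, 1.0])

# Single linear neuron without bias, y = w*x; best least-squares w:
w = (x @ t) / (x @ x)
print("no bias:  ", w * x)       # [0.  0.6  1.2] -- cannot hit 1 everywhere

# With a bias the solution is trivial: w = 0, b = 1.
w, b = 0.0, 1.0
print("with bias:", w * x + b)   # [1. 1. 1.]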

情深如许 2024-09-02 01:45:28

Modification of neuron WEIGHTS alone only serves to manipulate the shape/curvature of your transfer function, and not its equilibrium/zero crossing point.

The introduction of bias neurons allows you to shift the transfer function curve horizontally (left/right) along the input axis while leaving the shape/curvature unaltered.
This will allow the network to produce arbitrary outputs different from the defaults and hence you can customize/shift the input-to-output mapping to suit your particular needs.

See here for graphical explanation:
http://www.heatonresearch.com/wiki/Bias

浅沫记忆 2024-09-02 01:45:28

In a couple of experiments in my masters thesis (e.g. page 59), I found that the bias might be important for the first layer(s), but especially at the fully connected layers at the end it seems not to play a big role.

This might be highly dependent on the network architecture / dataset.

就此别过 2024-09-02 01:45:28

If you're working with images, you might actually prefer not to use a bias at all. In theory, that way your network will be more independent of data magnitude, as in whether the picture is dark, or bright and vivid. And the net is going to learn to do its job by studying the relative relationships inside your data. Lots of modern neural networks utilize this.

For other data having biases might be critical. It depends on what type of data you're dealing with. If your information is magnitude-invariant --- if inputting [1,0,0.1] should lead to the same result as if inputting [100,0,10], you might be better off without a bias.

兰花执着 2024-09-02 01:45:28

Bias determines the angle by which your weight vector will rotate.

In a two-dimensional chart, weight and bias can help us to find the decision boundary of outputs.

Say we need to build an AND function; the input (p) - output (t) pairs should be

{p=[0,0], t=0},{p=[1,0], t=0},{p=[0,1], t=0},{p=[1,1], t=1}

[figure omitted]

Now we need to find a decision boundary, and the ideal boundary should be:

[figure omitted]

See? W is perpendicular to our boundary. Thus, we say W decides the direction of the boundary.

However, it is hard to find the correct W the first time. Mostly, we choose the initial W value randomly. Thus, the first boundary may look like this:
[figure omitted]

Now the boundary is parallel to the y axis.

We want to rotate the boundary. How?

By changing the W.

So, we use the learning rule function: W'=W+P:

[figure omitted]

W' = W + P is equivalent to W' = W + bP when b = 1.

Therefore, by changing the value of b (the bias), you can decide the angle between W' and W. That is "the learning rule of an ANN".

You could also read Neural Network Design by Martin T. Hagan / Howard B. Demuth / Mark H. Beale, chapter 4 "Perceptron Learning Rule"

┼── 2024-09-02 01:45:28

To think of it in a simple way: if you have y = w1*x, where y is your output and w1 is the weight, imagine a condition where x = 0; then y = w1*x equals 0.

If you want to update your weight, you have to compute the change by delw = target - y, where target is your target output. In this case 'delw' will not change anything, since y is computed as 0. So, suppose you can add some extra value: it will help to have y = w1*x + w0*1, where bias = 1 and the weight w0 can be adjusted to get a correct bias. Consider the example below.

In terms of a line, the slope-intercept form is a specific form of linear equation.

y = mx + b

Check the image

[figure omitted]

Here b puts the intercept at (0, 2).

If you want to increase it to (0, 3), how will you do it? By changing the value of b, the bias.
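A hedged sketch of the update argument above, using the standard delta rule (the learning rate and numbers are made up): when x = 0 the weight update is scaled by the input and does nothing, while the bias weight, whose input is always 1, can still move the output to the target.

lr = 0.1
x, target = 0.0, 3.0          # the problematic case: the input is zero

# Neuron without bias: y = w1*x. The weight update is proportional to x, so it is stuck.
w1 = 0.5
y = w1 * x
err = target - y              # error is 3.0, but ...
w1 += lr * err * x            # ... the update is multiplied by x = 0, so nothing changes
print(w1, w1 * x)             # still 0.5, output still 0.0

# Neuron with bias: y = w1*x + w0*1. The bias weight sees a constant input of 1.
w1, w0 = 0.5, 0.0
for _ in range(100):
    y = w1 * x + w0 * 1.0
    err = target - y
    w1 += lr * err * x        # no effect here, since x = 0
    w0 += lr * err * 1.0      # but the bias weight keeps learning
print(w0, w1 * x + w0)        # w0 approaches 3.0, so the output reaches the target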
