Training a neural network to add

Published 2024-10-02 18:12:04


I need to train a network to multiply or add 2 inputs, but it doesn't seem to approximate all points well, even after 20,000 iterations. More specifically, I train it on the whole dataset and it approximates the last points well, but it doesn't seem to be getting any better on the first points. I normalize the data so that it lies between -0.8 and 0.8. The network itself consists of 2 inputs, 3 hidden neurons, and 1 output neuron. I also set the network's learning rate to 0.25, and use tanh(x) as the activation function.

It approximates the points that are trained last in the dataset really well, but for the first points it can't seem to approximate well. I wonder what it is that isn't helping it adjust well: is it the topology I am using, or something else?

Also, how many neurons are appropriate in the hidden layer for this network?
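
For reference, here is a minimal NumPy sketch of the setup as described (hypothetical code, not the asker's actual program): a 2-3-1 tanh network trained by stochastic backpropagation at learning rate 0.25, with the dataset presented in a fixed order, as the question suggests.

import numpy as np

rng = np.random.default_rng(0)

# 2 inputs -> 3 hidden (tanh) -> 1 output (tanh), as described above.
W1 = rng.normal(0.0, 0.5, (3, 2)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 0.5, (1, 3)); b2 = np.zeros(1)
lr = 0.25

# Addition dataset, normalized so inputs and targets lie in [-0.8, 0.8].
X = rng.uniform(-0.4, 0.4, (200, 2))
T = X.sum(axis=1, keepdims=True)

for it in range(20000):
    x, t = X[it % len(X)], T[it % len(X)]   # fixed presentation order
    h = np.tanh(W1 @ x + b1)                # forward pass
    y = np.tanh(W2 @ h + b2)
    dy = (y - t) * (1 - y ** 2)             # backprop through output tanh
    dh = (W2.T @ dy) * (1 - h ** 2)         # backprop through hidden tanh
    W2 -= lr * np.outer(dy, h); b2 -= lr * dy
    W1 -= lr * np.outer(dh, x); b1 -= lr * dh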


べ映画 2024-10-09 18:12:04

A network consisting of a single neuron with weights = {1, 1}, bias = 0, and a linear activation function performs the addition of the two input numbers.

Multiplication may be harder. Here are two approaches that a net can use:

  1. Convert one of the numbers to digits (for example, binary) and perform multiplication as you did in elementary school: a*b = a*(b0*2^0 + b1*2^1 + ... + bk*2^k) = a*b0*2^0 + a*b1*2^1 + ... + a*bk*2^k. This approach is simple, but requires a variable number of neurons, proportional to the length (logarithm) of the input b.
  2. Take logarithms of the inputs, add them, and exponentiate the result: a*b = exp(ln(a) + ln(b)). This network can work on numbers of any length, as long as it can approximate the logarithm and the exponential well enough (see the sketch below).
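
For illustration, a minimal NumPy sketch of both ideas (hypothetical code, not part of the original answer): the single-neuron adder computes the sum exactly, while the log-exp identity is what a multiplication network would have to approximate, and only for positive inputs.

import numpy as np

def adder_neuron(a, b):
    # One neuron: weights {1, 1}, bias 0, linear (identity) activation.
    w, bias = np.array([1.0, 1.0]), 0.0
    return float(w @ np.array([a, b]) + bias)

def log_exp_multiply(a, b):
    # a*b = exp(ln(a) + ln(b)); a real net would approximate ln and exp
    # with trained layers instead of calling them exactly.
    return float(np.exp(np.log(a) + np.log(b)))

print(adder_neuron(3.0, 4.0))       # 7.0
print(log_exp_multiply(3.0, 4.0))   # ~12.0
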
初见终念 2024-10-09 18:12:04


It may be too late, but a simple solution is to use an RNN (Recurrent Neural Network).

RNN SUM TWO DIGITS

After converting your numbers to digits, your NN will take a couple of digits at a time from the sequence of digits, from left to right.

The RNN has to loop one of its outputs back, so that it can automatically understand that there is a digit to carry (if the sum is 2, write a 0 and carry a 1).

To train it, you'll need to give it inputs consisting of two digits (one from the first number, the second from the second number) and the desired output. The RNN will end up finding how to do the sum.

Notice that this RNN only needs to know the following 8 cases to learn how to sum two numbers (see the sketch after this list):

  • 1 + 1, 0 + 0, 1 + 0, 0 + 1 with carry
  • 1 + 1, 0 + 0, 1 + 0, 0 + 1 without carry
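
A plain-Python sketch of the carry logic such an RNN has to learn (hypothetical code): the 8 cases above are exactly the rows of a binary full adder, with the carry playing the role of the looped-back output.

def full_adder(a, b, carry_in):
    # The 8 cases: (a, b) in {0, 1}^2, with or without an incoming carry.
    total = a + b + carry_in
    return total % 2, total // 2          # (sum digit, carry out)

def add_binary(x, y):
    # Feed digit pairs least-significant digit first, looping the carry
    # back at each step, which is the job of the recurrent connection.
    carry, out = 0, []
    for a, b in zip(x[::-1], y[::-1]):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    if carry:
        out.append(carry)
    return out[::-1]

print(add_binary([1, 0, 1], [0, 1, 1]))   # 101 + 011 = 1000
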
戴着白色围巾的女孩 2024-10-09 18:12:04


If you want to keep things neural (links have weights, a neuron computes the weighted sum of its inputs and answers 0 or 1 depending on the sigmoid of the sum, and you use backpropagation of the gradient), then you should think of the neurons of the hidden layer as classifiers. Each one defines a line that separates the input space into two classes: one class corresponds to the part where the neuron responds 1, the other to the part where it responds 0. A second neuron of the hidden layer will define another separation, and so forth. The output neuron combines the outputs of the hidden layer by adapting its weights so that its outputs correspond to the ones you presented during learning.
Hence, a single neuron will classify the input space into 2 classes (maybe corresponding to an addition, depending on the learning database). Two neurons will be able to define 4 classes, three neurons 8 classes, and so on. Think of the outputs of the hidden neurons as powers of 2: h1*2^0 + h2*2^1 + ... + hn*2^(n-1), where hi is the output of hidden neuron i. NB: you will need n output neurons. This answers the question about the number of hidden neurons to use.
But the NN doesn't compute the addition. It sees it as a classification problem based on what it has learned, and it will never be able to generate a correct answer for values outside its learning base. During the learning phase, it adjusts the weights in order to place the separators (lines in 2D) so as to produce the correct answer. If your inputs are in [0,10], it will learn to produce the correct answers for additions of values in [0,10]^2, but will never give a good answer for 12 + 11.
If your last values are learned well and the first ones forgotten, try lowering the learning rate: the weight modifications (which depend on the gradient) for the last examples may override those for the first ones (if you're using stochastic backprop). Make sure your learning base is fair. You can also present the badly learned examples more often, and try several values of the learning rate until you find a good one.
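
A minimal sketch of that advice (hypothetical code, using a linear model for brevity): lower the learning rate and reshuffle the presentation order every epoch, so that late examples do not overwrite early ones.

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-0.4, 0.4, (200, 2))
T = X.sum(axis=1)                      # target: a + b

w = np.zeros(2)
lr = 0.05                              # lower than the original 0.25
order = np.arange(len(X))
for epoch in range(50):
    rng.shuffle(order)                 # fair presentation order each epoch
    for i in order:
        e = w @ X[i] - T[i]            # stochastic gradient step
        w -= lr * e * X[i]

print(w)                               # approaches [1, 1]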

行雁书 2024-10-09 18:12:04


I was trying to do the same. I trained 2-, 3-, and 4-digit addition and was able to achieve 97% accuracy. You can achieve this with one type of neural network:

Sequence to Sequence Learning with Neural Networks

A sample program with a Jupyter Notebook from Keras is available at the following link:

https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py

Hope it helps.

Attaching the code here for reference.

from __future__ import print_function
from keras.models import Sequential
from keras import layers
import numpy as np
from six.moves import range


class CharacterTable(object):
    """Given a set of characters:
    + Encode them to a one hot integer representation
    + Decode the one hot integer representation to their character output
    + Decode a vector of probabilities to their character output
    """
    def __init__(self, chars):
        """Initialize character table.
        # Arguments
            chars: Characters that can appear in the input.
        """
        self.chars = sorted(set(chars))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))

    def encode(self, C, num_rows):
        """One hot encode given string C.
        # Arguments
            num_rows: Number of rows in the returned one hot encoding. This is
                used to keep the # of rows for each data the same.
        """
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x

    def decode(self, x, calc_argmax=True):
        if calc_argmax:
            x = x.argmax(axis=-1)
        return ''.join(self.indices_char[i] for i in x)


class colors:
    ok = '\033[92m'
    fail = '\033[91m'
    close = '\033[0m'

# Parameters for the model and dataset.
TRAINING_SIZE = 50000
DIGITS = 3
INVERT = True

# Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
# int is DIGITS.
MAXLEN = DIGITS + 1 + DIGITS

# All the numbers, plus sign and space for padding.
chars = '0123456789+ '
ctable = CharacterTable(chars)

questions = []
expected = []
seen = set()
print('Generating data...')
while len(questions) < TRAINING_SIZE:
    f = lambda: int(''.join(np.random.choice(list('0123456789'))
                    for i in range(np.random.randint(1, DIGITS + 1))))
    a, b = f(), f()
    # Skip any addition questions we've already seen
    # Also skip any such that x+Y == Y+x (hence the sorting).
    key = tuple(sorted((a, b)))
    if key in seen:
        continue
    seen.add(key)
    # Pad the data with spaces such that it is always MAXLEN.
    q = '{}+{}'.format(a, b)
    query = q + ' ' * (MAXLEN - len(q))
    ans = str(a + b)
    # Answers can be of maximum size DIGITS + 1.
    ans += ' ' * (DIGITS + 1 - len(ans))
    if INVERT:
        # Reverse the query, e.g., '12+345  ' becomes '  543+21'. (Note the
        # space used for padding.)
        query = query[::-1]
    questions.append(query)
    expected.append(ans)
print('Total addition questions:', len(questions))

print('Vectorization...')
x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)  # np.bool was removed in newer NumPy
y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
for i, sentence in enumerate(questions):
    x[i] = ctable.encode(sentence, MAXLEN)
for i, sentence in enumerate(expected):
    y[i] = ctable.encode(sentence, DIGITS + 1)

# Shuffle (x, y) in unison as the later parts of x will almost all be larger
# digits.
indices = np.arange(len(y))
np.random.shuffle(indices)
x = x[indices]
y = y[indices]

# Explicitly set apart 10% for validation data that we never train over.
split_at = len(x) - len(x) // 10
(x_train, x_val) = x[:split_at], x[split_at:]
(y_train, y_val) = y[:split_at], y[split_at:]

print('Training Data:')
print(x_train.shape)
print(y_train.shape)

print('Validation Data:')
print(x_val.shape)
print(y_val.shape)

# Try replacing GRU, or SimpleRNN.
RNN = layers.LSTM
HIDDEN_SIZE = 128
BATCH_SIZE = 128
LAYERS = 1

print('Build model...')
model = Sequential()
# "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE.
# Note: In a situation where your input sequences have a variable length,
# use input_shape=(None, num_feature).
model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
# As the decoder RNN's input, repeatedly provide with the last hidden state of
# RNN for each time step. Repeat 'DIGITS + 1' times as that's the maximum
# length of output, e.g., when DIGITS=3, max output is 999+999=1998.
model.add(layers.RepeatVector(DIGITS + 1))
# The decoder RNN could be multiple layers stacked or a single layer.
for _ in range(LAYERS):
    # By setting return_sequences to True, return not only the last output but
    # all the outputs so far in the form of (num_samples, timesteps,
    # output_dim). This is necessary as TimeDistributed in the below expects
    # the first dimension to be the timesteps.
    model.add(RNN(HIDDEN_SIZE, return_sequences=True))

# Apply a dense layer to the every temporal slice of an input. For each of step
# of the output sequence, decide which character should be chosen.
model.add(layers.TimeDistributed(layers.Dense(len(chars))))
model.add(layers.Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()

# Train the model each generation and show predictions against the validation
# dataset.
for iteration in range(1, 200):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(x_train, y_train,
              batch_size=BATCH_SIZE,
              epochs=1,
              validation_data=(x_val, y_val))
    # Select 10 samples from the validation set at random so we can visualize
    # errors.
    for i in range(10):
        ind = np.random.randint(0, len(x_val))
        rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
        # predict_classes was removed in newer Keras; take the argmax instead.
        preds = np.argmax(model.predict(rowx, verbose=0), axis=-1)
        q = ctable.decode(rowx[0])
        correct = ctable.decode(rowy[0])
        guess = ctable.decode(preds[0], calc_argmax=False)
        print('Q', q[::-1] if INVERT else q, end=' ')
        print('T', correct, end=' ')
        if correct == guess:
            print(colors.ok + '☑' + colors.close, end=' ')
        else:
            print(colors.fail + '☒' + colors.close, end=' ')
        print(guess)
青春有你 2024-10-09 18:12:04


Think about what would happen if you replaced your tanh(x) threshold function with a linear function of x (call it a*x) and treated a as the sole learning parameter in each neuron. That's effectively what your network will be optimising towards; a*x is an approximation of tanh around its zero crossing.

Now, what happens when you layer neurons of this linear type? You multiply the output of each neuron as the pulse goes from input to output. You're trying to approximate addition with a set of multiplications. That, as they say, does not compute.
