WEKA's MultilayerPerceptron: training then training again
I am trying to do the following with weka's MultilayerPerceptron:
- Train with a small subset of the training Instances for a portion of the epochs input,
- Train with whole set of Instances for the remaining epochs.
However, when I do the following in my code, the network seems to reset itself to start with a clean slate the second time.
mlp.setTrainingTime(smallTrainingSetEpochs);
mlp.buildClassifier(smallTrainingSet);
mlp.setTrainingTime(wholeTrainingSetEpochs);
mlp.buildClassifier(wholeTrainingSet);
Am I doing something wrong, or is this the way that the algorithm is supposed to work in weka?
If you need more information to answer this question, please let me know. I am kind of new to programming with weka and am unsure as to what information would be helpful.
This thread on the weka mailing list covers a question very similar to yours.
It seems that this is how weka's MultilayerPerceptron is supposed to work. It is designed as a 'batch' learner, and you are trying to use it incrementally. Only classifiers that implement weka.classifiers.UpdateableClassifier can be trained incrementally.
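For comparison, here is a minimal sketch of how incremental training looks with a classifier that does implement UpdateableClassifier, using NaiveBayesUpdateable as a stand-in (MultilayerPerceptron does not implement the interface, so this pattern cannot be applied to it directly). The variables `smallTrainingSet` and `wholeTrainingSet` are assumed to be pre-loaded Instances objects with the class index already set:

```java
import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instances;

// Initial batch build on the small subset
NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
nb.buildClassifier(smallTrainingSet);

// Subsequent instances update the existing model
// instead of resetting and rebuilding it
for (int i = 0; i < wholeTrainingSet.numInstances(); i++) {
    nb.updateClassifier(wholeTrainingSet.instance(i));
}
```

Calling buildClassifier a second time would discard the model even for an updateable classifier; it is updateClassifier that preserves the learned state between calls.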