Further training a DL model
I cannot train my model on all of my data in Colab because of RAM limits, so I was thinking of doing something like this:
from tensorflow.keras.models import load_model

# train on the first 25% of the data, then save
model.save('model_trained_on_25pct.h5')

# resume from the checkpoint and train on the next 25%
new_model = load_model('model_trained_on_25pct.h5')
new_model.fit(x_25_to_50pct, y_25_to_50pct)
new_model.save('model_trained_on_50pct.h5')

new_model2 = load_model('model_trained_on_50pct.h5')
new_model2.fit(x_50_to_75pct, y_50_to_75pct)
new_model2.save('model_trained_on_75pct.h5')

new_model3 = load_model('model_trained_on_75pct.h5')
new_model3.fit(x_75_to_100pct, y_75_to_100pct)
new_model3.save('model_trained_on_100pct.h5')
I have read about catastrophic forgetting, where a model forgets what it learned from the old data. I don't want to retrain only on the new data, so how can I make sure the model doesn't forget what it learned in the previous training runs?
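One common way to get the memory savings of chunked training without the sequential-chunk forgetting problem is to keep cycling through *all* chunks, loading only one into RAM at a time, so every pass over the data still sees the old examples. Below is a minimal, hedged sketch of that idea; the tiny model, the in-memory `chunks` list, and the filename are hypothetical placeholders (in practice each chunk would be a separate `.npy` file loaded on demand):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for four 25% chunks of the dataset. In a real Colab
# session each chunk would live on disk and be loaded one at a time, so only
# one chunk ever occupies RAM.
rng = np.random.default_rng(0)
chunks = [
    (rng.random((64, 10)).astype("float32"),
     rng.integers(0, 2, size=(64, 1)).astype("float32"))
    for _ in range(4)
]

# A tiny placeholder model, just to make the sketch runnable.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Key difference from the sequential plan: the outer loop revisits every
# chunk on each pass, so earlier chunks are never abandoned and the model
# is far less prone to forgetting them.
for epoch in range(3):
    for x_chunk, y_chunk in chunks:
        model.fit(x_chunk, y_chunk, epochs=1, batch_size=16, verbose=0)

model.save("model_trained_on_all_chunks.h5")
```

The same effect can be had more idiomatically with a generator or `tf.data` pipeline passed to a single `model.fit` call, which streams chunks from disk for you.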