ConnectException: Connection refused (Connection refused) error while obtaining a new communication channel
I ran into this error and I'm not sure what I'm doing wrong! It would be really helpful if someone could help me resolve it. Thanks!
Create the output model directory
dbutils.fs.mkdirs('/CIFAR10/models/')
Out[21]: True
from keras.models import load_model

# Save the trained model to DBFS through the /dbfs FUSE mount
modelpath = '/dbfs/CIFAR10/models/cifar_2000pictures.h5'
model2000.save(modelpath)
#model = load_model(modelpath)
ConnectException: Connection refused (Connection refused)
Error while obtaining a new communication channel
ConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.
Comments (1)
This is an OOM (out-of-memory) issue: the driver crashed, so Spark cannot establish a new connection with it. Try the following options:
Increase the value of spark.sql.shuffle.partitions.
No matter how big the cluster is, if you process a large dataset with Python pandas, the operations run on a single machine (the driver) rather than being distributed across executors the way Spark operations are. In that case, increase the memory size of the driver node. A minimal sketch of both settings follows below.
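As a sketch, assuming a Databricks notebook where spark is the pre-created SparkSession; the partition count of 200 and the 8g memory size are illustrative assumptions, not tuned recommendations:

# Shuffle partition count can be changed at runtime from the notebook
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Driver memory cannot be changed at runtime; set it in the cluster's
# Spark config (Cluster > Advanced Options > Spark) before startup, e.g.:
#   spark.driver.memory 8g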
For more information, follow these articles:
https://medium.com/disney-streaming/a-step-by-step-guide-for-debugging-memory-leaks-in-spark-applications-e0dd05118958
https://blog.clairvoyantsoft.com/apache-spark-out-of-memory-issue-b63c7987fff