ConnectException: Connection refused (Connection refused) error while obtaining a new communication channel

Posted on 2025-01-24 12:10:30


I came across this error and I'm not sure where I've gone wrong! It would be really helpful if someone could help me out on this. Thanks!

Create Output Model Directory

dbutils.fs.mkdirs('/CIFAR10/models/')
Out[21]: True


from keras.models import load_model
modelpath = '/dbfs/CIFAR10/models/cifar_2000pictures.h5'
model2000.save(modelpath)
#model = load_model(modelpath)


ConnectException: Connection refused (Connection refused)
Error while obtaining a new communication channel
ConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.
Spark tip settings


Comments (1)

萌梦深 2025-01-31 12:10:30


This is an OOM (Out Of Memory) issue: the driver crashed out of memory, so a new connection with the driver cannot be established. Please try the options below:

  1. Reduce the load on the executors by filtering out as much data as possible and by using partition columns; where possible, this will largely reduce data movement.
  2. Incorrect configuration: the application may also fail because of a YARN memory-overhead issue (if Spark is running on YARN). You can mitigate it by adjusting the partitioning:
    increase the value of spark.sql.shuffle.partitions (see the sketch after this list).
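
As a minimal sketch of how these settings could look in a Databricks/PySpark notebook (the values are assumptions, to be tuned to your workload):

# spark.sql.shuffle.partitions can be changed at runtime from the notebook; more,
# smaller shuffle partitions mean less data per task and lower memory pressure.
spark.conf.set("spark.sql.shuffle.partitions", "400")  # assumed value

# The YARN memory overhead, by contrast, is fixed when the application starts, so it
# is usually passed at submit time, e.g. spark-submit --conf spark.executor.memoryOverhead=2g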

Why is this happening?

It does not matter how big the cluster is: if you process a larger dataset with Python pandas, the operations run on a single machine (the driver), unlike Spark jobs, which run across multiple machines. So you can increase the memory size of the driver node (a sketch follows below).
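
For the driver memory specifically, a minimal sketch, assuming you control how the SparkSession is created; on Databricks the driver memory is normally governed by the driver node type chosen in the cluster settings, since the notebook's session already exists:

from pyspark.sql import SparkSession

# spark.driver.memory only takes effect when the driver JVM starts, so it must be
# set before the session is created (or in the cluster / spark-submit configuration).
spark = (
    SparkSession.builder
        .appName("cifar10-training")           # assumed application name
        .config("spark.driver.memory", "16g")  # assumed size; tune to your data
        .getOrCreate()
)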

For more information, see these articles:

https://medium.com/disney-streaming/a-step-by-step-guide-for-debugging-memory-leaks-in-spark-applications-e0dd05118958

https://blog.clairvoyantsoft.com/apache-spark-out-of-memory-issue-b63c7987fff
