I have PySpark jobs running in AWS EMR. Recently, I upgraded (AWS EMR 6.4, Spark 3.1.2) and switched to running the job in a Docker container. Ever since, there are sporadic thread dumps in the stdout logs that start with `Full thread dump OpenJDK 64-Bit Server VM (25.312-b07 mixed mode)`.

I've been unable to figure out why they occur. There are no associated errors or warnings in stderr, and the job is unaffected. However, these thread dumps make it difficult to read the stdout logs. Things I have tried include using previous versions of AWS EMR and even simpler EMR configurations. I suspect that AWS EMR is sending SIGQUIT somewhere, since I did not find anything in the Spark source that would do it (except for thread dumps initiated by the Spark UI, and by the Spark task reaper, which is disabled).

At a loss for what else to try, I would resign myself to instructing the JVM to redirect these thread dumps, or even to ignore the signal that triggers them, if that's an option. I am open to alternative suggestions.

I am aware of `-Xrs`, but I suspect it's not what I want, since it would likely kill the process on the first SIGQUIT.
I have a solution for viewing the logs on the instance itself or in another Unix environment.

By piping the output through a `mawk` filter, we can strip the stack traces when reading or tailing the logs. On Amazon Linux this requires installing the `mawk` package from the EPEL repository. Create a function that generates a tmp filename, tails and filters the input file, and writes the result to the tmp file. Then open the tmp file with `less`, and remove the tmp file when the user closes `less`. The filter removes all lines between `^Full thread dump` and `^\[[0-9`, which works for me because my log lines start with `[2023-09-7 ...`.

You can now view the logs like this:
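A minimal sketch of the helper described above, under my own assumptions: the function name `lesslog` is mine, and the timestamp pattern should be adjusted to match your log format.

```shell
# Sketch of the described helper; "lesslog" is a hypothetical name.
# Assumes mawk is installed (from the EPEL repository on Amazon Linux).
lesslog() {
    tmp="$(mktemp)"
    # Follow the log, dropping every line from "Full thread dump"
    # up to (but not including) the next timestamped line like "[2023-...".
    tail -n +1 -F -- "$1" | mawk '
        /^Full thread dump/ { skip = 1 }
        /^\[[0-9]/          { skip = 0 }
        !skip' > "$tmp" &
    filter_pid=$!
    less +F -- "$tmp"        # Ctrl-C stops following, q quits less
    kill "$filter_pid" 2>/dev/null
    rm -f -- "$tmp"
}
```

Invoked as `lesslog <logfile>`; the `+F` flag makes `less` follow the filtered file as new lines arrive, similar to `tail -f`.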