How do I stop seeing JVM full thread dumps in my AWS EMR Spark job stdout logs?

Posted 2025-01-25 01:13:28

I have PySpark jobs running in AWS EMR. Recently, I upgraded (AWS EMR 6.4, Spark 3.1.2) and switched to running the job in a Docker container. Ever since, there are sporadic thread dumps in the stdout logs that start with Full thread dump OpenJDK 64-Bit Server VM (25.312-b07 mixed mode).

I've been unable to figure out why they occur. There are no associated errors or warnings in stderr, and the job is unaffected; the dumps just make the stdout logs hard to read. Things I have tried include using previous versions of AWS EMR and even simpler EMR configurations, since I suspect AWS EMR is sending a SIGQUIT somewhere: I did not find anything in the Spark source that would produce these dumps (except for thread dumps initiated by the Spark UI, and by the Spark task reaper, which is disabled).
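
For reference, ruling the task reaper out can also be made explicit at submit time. This is only a sketch assuming a plain spark-submit launch: spark.task.reaper.enabled is a standard Spark conf that already defaults to false, and my_job.py is a placeholder for the actual entry point.

# Sketch: pin the task reaper off explicitly to rule it out as the dump source.
spark-submit \
  --conf spark.task.reaper.enabled=false \
  my_job.py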

At a loss for what else to try, I would settle for instructing the JVM to redirect these thread dumps, or even to ignore the signal that triggers them, if that's an option. I am open to alternative suggestions.

I am aware of -Xrs but I suspect it's not what I want, since it would likely kill the process on the first SIGQUIT.
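
For completeness, if someone did want to experiment with -Xrs anyway, it would be passed through the standard Spark JVM-option confs. A sketch only (my_job.py is a placeholder): note that with -Xrs the JVM installs no SIGQUIT handler, so a SIGQUIT falls through to the OS default action and terminates the process, which is exactly the concern above.

# Sketch: -Xrs suppresses the JVM's SIGQUIT thread-dump handler, at the cost
# of letting SIGQUIT kill the process outright.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Xrs" \
  --conf "spark.executor.extraJavaOptions=-Xrs" \
  my_job.py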

Comments (1)

╭⌒浅淡时光〆 2025-02-01 01:13:29

I have a solution for viewing the logs on the instance itself or in another Unix environment.

By piping the output through a mawk filter, we can remove the stack traces when reading or tailing the logs.

On Amazon Linux, this requires installing the mawk package from the EPEL repository.

sudo amazon-linux-extras install epel -y
sudo yum install mawk -y

Create a function that makes a tmp file, tails and filters the input file, and writes the result to the tmp file. Then open the tmp file with less, and remove the tmp file when the user quits less.
The filter drops everything from a line matching ^Full thread dump up to (but not including) the next line matching ^\[[0-9], which works for me because my log lines start with a timestamp like [2023-09-7 ...

less_log() {
  tmp_file=$(mktemp)
  # Follow the log from its first line; mawk -W interactive flushes per line
  # so new output reaches the tmp file (and less) promptly.
  tail -f -n +1 "$1" | mawk -W interactive '/^Full thread dump/{f=1} /^\[[0-9]/{f=0} !f' > "$tmp_file" &
  less "$tmp_file"
  kill %% 2>/dev/null   # stop the background tail | mawk job
  rm "$tmp_file"
}

You can now view the logs like this:

less_log /var/log/hadoop-yarn/containers/application_1693901863825_0025/container_1693901863825_0025_01_000001/stdout
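
If the job has already finished and there is no need to tail, the same awk program can filter the file in one pass with no tmp file (here stdout stands in for the same container log file shown above):

# One-off filtering of a finished log; no tail or tmp file needed.
mawk '/^Full thread dump/{f=1} /^\[[0-9]/{f=0} !f' stdout | less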
