Jenkins: What is a good way to store variables between two job runs?

Posted 2024-12-20 04:42:19


I have a time-triggered job which needs to retrieve certain values stored in a previous run of this job.

Is there a way to store values between job runs in the Jenkins environment?

E.g., I can write something like the following in a shell script action:

XXX=`cat /hardcoded/path/xxx`
#job itself
echo NEW_XXX > /hardcoded/path/xxx

But is there a more reliable approach?


东京女 2024-12-27 04:42:19


A few options:

  • Store the data in the workspace. If the data isn't critical (i.e. it's OK to nuke it when the workspace is nuked), that should be fine. I only use this to cache expensive-to-compute data such as prebuilt library dependencies.
  • Store the data in some fixed location in the filesystem. You'll make Jenkins less self-contained and thus make migrations and backups more complex - but probably not by much, especially if you store the data in some custom user subdirectory of Jenkins. Parallel builds will also be tricky, and distributed builds likely impossible. Jenkins has a userContent subdirectory you could use for this - that way the file is at least part of the Jenkins install and thus more easily migrated or backed up. I do this for the (rather large) code coverage trend files for my builds.
  • Store the data on a different machine (e.g. a database). This is more complicated to set up, but you're less dependent on the local machine's details, and it's probably easier to get distributed and parallel builds working. I've done this to maintain a live changelog.
  • Store the data as a build artifact. This means looking at a previous build's artifacts. It's safe and repeatable, and because URIs are used to access such artifacts, it works for distributed builds too. However, you need to deal with failed builds (should you look back several versions? start from scratch?) and you'll be storing many copies, which is fine if it's 1 KB but less fine if it's 1 GB. Another downside is that you'll probably need to open up Jenkins' security settings quite far to allow anonymous access to artifacts (since you're just downloading from a URI). A sketch of this option follows the list.

The appropriate solution will depend on your situation.
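
A minimal sketch of the artifact option, assuming the Copy Artifact plugin is installed for the copyArtifacts step; the job name my-stateful-job and the file state/value.txt are placeholders, not anything from the original question:

// Sketch only: persist a value as a build artifact and read it back from the
// last successful run. Requires the Copy Artifact plugin; 'my-stateful-job'
// and 'state/value.txt' are placeholder names.
pipeline {
  agent any
  stages {
    stage('Load previous value') {
      steps {
        // Fetch the state file archived by the last successful run, if any.
        copyArtifacts projectName: 'my-stateful-job',
                      selector: lastSuccessful(),
                      filter: 'state/value.txt',
                      optional: true
        script {
          env.XXX = fileExists('state/value.txt') ? readFile('state/value.txt').trim() : ''
        }
      }
    }
    stage('Job itself') {
      steps {
        // ... do the real work, then write the value to carry forward ...
        sh 'mkdir -p state && echo "NEW_XXX" > state/value.txt'
      }
    }
    stage('Save value for next run') {
      steps {
        archiveArtifacts artifacts: 'state/value.txt'
      }
    }
  }
}

With optional: true the copy step simply finds nothing on the very first run, so the job starts from an empty value instead of failing.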

姐不稀罕 2024-12-27 04:42:19


If you are using Pipelines and your variable is of a simple type, you can use a parameter to store it between runs of the same job.

Using the properties step, you can configure parameters and their default values from within the pipeline. Once configured, you can read them at the start of each run and save them (as the default value) at the end. In a declarative pipeline it could look something like this:

pipeline {
  agent none
  options {
    skipDefaultCheckout true
  }
  stages {
    stage('Read Variable'){
      steps {
        script {
          try {
            variable = params.YOUR_VARIABLE
          }
          catch (Exception e) {
            echo("Could not read variable from parameters, assuming this is the first run of the pipeline. Exception: ${e}")
            variable = ""
          }
        }
      }

    }
    stage('Save Variable for next run'){
      steps {
        script {
          properties([
            parameters([
              string(defaultValue: "${variable}", description: 'Variable description', name: 'YOUR_VARIABLE', trim: true)
            ])
          ])
        }
      }
    }
  }
}

梦过后 2024-12-27 04:42:19


I would pass the variable from the first job to the second as a parameter in a parameterized build. See this question for more info on how to trigger a parameterized build from another build.
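
A minimal sketch of this hand-off in Pipeline syntax, assuming the downstream job declares a matching string parameter; the job name downstream-job and the parameter XXX are placeholders:

// Sketch only: trigger another job and hand the value over as a parameter.
// 'downstream-job' and the parameter 'XXX' are placeholder names.
pipeline {
  agent any
  stages {
    stage('Compute and hand off') {
      steps {
        script {
          def newValue = 'NEW_XXX'   // value produced by this run
          build job: 'downstream-job',
                parameters: [string(name: 'XXX', value: newValue)],
                wait: false
        }
      }
    }
  }
}

Setting wait: false fires the downstream build without blocking the upstream run; drop it if the upstream job should wait for the result.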
