Passing a YAML pipeline output variable from a PowerShell script into a Terraform variable

Asked 2025-02-03 13:02:38

I have been working on a solution to send Azure alerts to Slack using a logic app (to transform the output from the alert into a JSON schema that Slack can display as a message). The logic app is deployed as an ARM template, in order to fully preserve its contents, while the rest of the Azure resources are deployed by Terraform. The Terraform configuration and the ARM template are deployed with an Azure DevOps YAML pipeline with multiple stages. So far I have written the logic app to transform alerts into messages (the logic app posts the message once the schema has been transformed).

My current dilemma is how to programmatically include the URL of the logic app (where the alerts should be sent) in the Terraform configuration. This is made harder by the fact that there is no attribute exposing the URL in the available data sources for logic app workflows, or for a Standard logic app instance.

In order to mitigate this gap in Terraform functionality, I have attempted to retrieve the logic app URL with an Az PowerShell module command (the Azure CLI doesn't appear to have the functionality yet). Using a short script I am able to get the URL that triggers the logic app:

$logicApp = Get-AzLogicAppTriggerCallbackUrl -ResourceGroupName "logic-app-rg" -Name "mylogicapp" -TriggerName "Manual"
$url = $logicApp.Value

By adding the following line, the URL can be exposed as an output variable in the YAML pipeline:

write-host "##vso[task.setvariable variable=outputURL;isOutput=true]$url"
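
Putting the two pieces together, a minimal sketch of what such a step could look like in the first stage is shown below; the AzurePowerShell@5 task and the service connection name my-azure-connection are assumptions, not taken from the original pipeline:

- task: AzurePowerShell@5
  name: getLogicAppURL
  inputs:
    # "my-azure-connection" is a placeholder service connection name
    azureSubscription: 'my-azure-connection'
    azurePowerShellVersion: 'LatestVersion'
    ScriptType: 'InlineScript'
    Inline: |
      # Look up the callback URL of the logic app's HTTP trigger and
      # publish it as the output variable "outputURL"
      $logicApp = Get-AzLogicAppTriggerCallbackUrl -ResourceGroupName "logic-app-rg" -Name "mylogicapp" -TriggerName "Manual"
      $url = $logicApp.Value
      Write-Host "##vso[task.setvariable variable=outputURL;isOutput=true]$url"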

As there are multiple stages, and only one logic app is needed, the retrieval is placed in the first stage, where the core infrastructure is created (the storage account for Terraform state).

The difficulty arises because I am unable to pass the data from the task that outputs the URL to a different stage, which contains the Terraform steps. The rough structure of the YAML pipeline (simplified):

stages:
- stage: infra-1
  jobs:
  - job: deploy-common-infra
    steps:
    - script: |
        cd core-infra
        terraform init
        terraform plan
        terraform apply
        $logicApp = Get-AzLogicAppTriggerCallbackUrl -ResourceGroupName "logic-app-rg" -Name "mylogicapp" -TriggerName "Manual"
        $url = $logicApp.Value
        write-host "##vso[task.setvariable variable=outputURL;isOutput=true]$url"
      name: getLogicAppURL
- stage: build
  jobs:
  - job: build
    steps:
    - task: build-app
- stage: infra-2
  dependsOn:
  - infra-1
  variables:
    outputURL: $[stageDependencies.infra-1.deploy-common-infra.outputs['getLogicAppURL.outputURL']]
  jobs:
  - job: deploy-infra
    steps:
    - script: |
        cd infra
        terraform init
        terraform plan -var="logicAppUrl='$(outputURL)'"
        terraform apply

It should be noted that in the real pipeline, I am using dedicated Terraform tasks, as opposed to writing Terraform commands in scripts.
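
For illustration only, a plan step using a dedicated task might look roughly like the following, assuming the Microsoft DevLabs Terraform extension; the task version, working directory, and service connection name are assumptions rather than details from the real pipeline:

- task: TerraformTaskV4@4
  displayName: 'Terraform plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    # Placeholder working directory and service connection name
    workingDirectory: '$(System.DefaultWorkingDirectory)/infra'
    commandOptions: '-var=logicAppUrl=$(outputURL)'
    environmentServiceNameAzureRM: 'my-azure-connection'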

The main part of my trouble is that I don't want to skip the "build" stage, otherwise there won't be an app to deploy in the last stage (which is excluded from the example pipeline above). In addition, the value that reaches Terraform is "null" (no URL is sent at all!)
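
While chasing down that empty value, a throwaway step in the deploy-infra job (a sketch, not part of the original pipeline) can at least confirm whether the mapped variable resolves to anything before Terraform runs:

    - script: |
        echo "Resolved logic app URL: $(outputURL)"
      displayName: 'Debug - print outputURL'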

I have used and looked at existing answers on how to share variables across jobs and stages, and on using dependencies in a pipeline, but have so far struggled to find a solution that allows me to pass the URL variable across stages. Is this the only way to pass a YAML variable across stages, into Terraform?

(An additional question might be: is this the best approach to the challenge at hand, or is there a different solution that I should be attempting?)

Answer by 微凉徒眸意, 2025-02-10 13:02:38

I ended up with a workaround for this problem by adding the script to the specific job, with a condition so that it only runs for that specific Terraform module, and by changing the script so it writes to the Terraform variable file directly. For example, building on the pipeline above with the solution I now have at hand:

stages:
- stage: infra-1
  jobs:
  - job: deploy-common-infra
    steps:
    - script: |
        cd core-infra
        terraform init
        terraform plan
        terraform apply
- stage: build
  jobs:
  - job: build
    steps:
    - task: build-app
- stage: infra-2
  dependsOn:
  - infra-1
  variables:
    outputURL: $[stageDependencies.infra-1.deploy-common-infra.outputs['getLogicAppURL.outputURL']]
  jobs:
  - job: deploy-infra
    steps:
    - template: logic-app.yaml
    - script: |
        cd infra
        terraform init
        terraform plan -var="logicAppUrl='$(outputURL)'"
        terraform apply

The logic-app.yaml template contains the script that appends the URL to the variable file, which is stored in a temporary location on the agent used by Azure DevOps:

steps:
- script: |
    $logicApp = Get-AzLogicAppTriggerCallbackUrl -ResourceGroupName "logic-app-rg" -Name "mylogicapp" -TriggerName "Manual"
    $url = $logicApp.Value
    Add-Content -Path "$directory/infra/vars.tfvars" -Value "`nslack_url=`"$url`""
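
One detail worth keeping in mind: Terraform only auto-loads terraform.tfvars and *.auto.tfvars files, so a file named vars.tfvars has to be passed explicitly. A sketch of a plan/apply step that consumes the file written by the template above (file name and path assumed to match the template):

- script: |
    cd infra
    # vars.tfvars now contains slack_url, appended by the logic-app.yaml template
    terraform init
    terraform plan -var-file="vars.tfvars"
    terraform apply -var-file="vars.tfvars" -auto-approve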

It should be noted that infra/vars.tfvars is the path in the repository that is used for the pipeline. By adding this to the pipeline, it now runs successfully.
