Azure Function blob trigger not firing

Posted on 2025-02-06 20:49:13


I have an Azure Function with a blob trigger. The function should trigger once a day, but it only works sometimes. I found that when the function is not awake at the moment a file arrives at the path used as the trigger, the function doesn't launch and the file is not processed. If I go to the portal and refresh the function, it starts working and processes all the queued files.
Is there a way to make the function trigger without "refreshing" it?
The function code is written in Python, and it is deployed to Azure using an Azure DevOps pipeline.
I attach the host.json configuration for more details:

{
  "version": "2.0",
  "logging": {
    "fileLoggingMode": "always",
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true
      }
    }
  },
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:02",
      "visibilityTimeout" : "00:00:30",
      "batchSize": 8,
      "maxDequeueCount": 5,
      "newBatchThreshold": 4,
      "messageEncoding": "base64"
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  },
  "functionTimeout": "-1",
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 0,
    "delayInterval": "00:00:05"
  }
}

Below you can find the function.json file with the trigger details:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "inStr",
      "type": "blobTrigger",
      "direction": "in",
      "path": "container/path_to_file/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

And this is part of the Python function code, from the file __init__.py:

def main(in_str):
    # in_str.uri is the full blob URI,
    # e.g. https://<account>.blob.core.windows.net/<container>/<path>
    uri = in_str.uri
    # Collapse "//" and split, so index 2 is the container
    # and everything after it is the blob path
    split_uri = uri.replace("//", "/").split("/")
    source_bucket = split_uri[2]                   # container name
    source_key = '/'.join(split_uri[3:])           # blob path within the container
    source_directory = '/'.join(split_uri[3:-1])   # virtual directory of the blob
    file_name = source_key.split('/')[-1]          # bare file name
    print(f"File name is {file_name}")
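For reference, here is how that URI-parsing logic behaves on a sample blob URI. This is a standalone sketch of the same splitting steps as `main()` above; the account name in the example URI is made up:

```python
def parse_blob_uri(uri):
    # Same logic as main(): collapse "//" and split on "/"
    split_uri = uri.replace("//", "/").split("/")
    source_bucket = split_uri[2]                   # container name
    source_key = '/'.join(split_uri[3:])           # blob path within the container
    source_directory = '/'.join(split_uri[3:-1])   # virtual directory
    file_name = source_key.split('/')[-1]          # bare file name
    return source_bucket, source_directory, file_name

# Hypothetical URI matching the "container/path_to_file/{name}" binding
uri = "https://myaccount.blob.core.windows.net/container/path_to_file/data.csv"
print(parse_blob_uri(uri))
# → ('container', 'path_to_file', 'data.csv')
```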
