Azure Function blob trigger not starting
I have an Azure Function with a blob trigger. The function should trigger once a day, but it only works sometimes. I found that if the function is not awake when a file arrives at the path used as the trigger, it does not launch and the file is not processed. If I go to the portal and refresh the function, it starts working and processes all the queued files.
Is there a way to make the function trigger without "refreshing" it?
The function code is written in Python and it is deployed to Azure using an Azure DevOps pipeline.
I attach the host.json configuration for more details:
{
    "version": "2.0",
    "logging": {
        "fileLoggingMode": "always",
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true
            }
        }
    },
    "extensions": {
        "queues": {
            "maxPollingInterval": "00:00:02",
            "visibilityTimeout": "00:00:30",
            "batchSize": 8,
            "maxDequeueCount": 5,
            "newBatchThreshold": 4,
            "messageEncoding": "base64"
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.3.0, 4.0.0)"
    },
    "functionTimeout": "-1",
    "retry": {
        "strategy": "fixedDelay",
        "maxRetryCount": 0,
        "delayInterval": "00:00:05"
    }
}
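As a quick sanity check, the file can also be parsed locally before deployment to rule out JSON syntax errors; this is a minimal sketch and assumes host.json sits in the current working directory:

import json
from pathlib import Path

# Parse host.json to surface any syntax errors before the app is deployed.
# The relative path is an assumption; adjust it to the function app root.
host_config = json.loads(Path("host.json").read_text())
print(host_config["extensionBundle"]["version"])   # "[3.3.0, 4.0.0)"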
Below you can find the function.json file with the trigger details:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "in_str",
            "type": "blobTrigger",
            "direction": "in",
            "path": "container/path_to_file/{name}",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
And this is part of the Python function code, file __init__.py:
def main(in_str):
    # Full blob URI, e.g. https://<account>.blob.core.windows.net/<container>/<path>/<file>
    uri = in_str.uri
    # Collapse "//" and split: index 2 is the container, the remainder is the blob path.
    split_uri = uri.replace("//", "/").split("/")
    source_bucket = split_uri[2]
    source_key = '/'.join(split_uri[3:])
    source_directory = '/'.join(split_uri[3:-1])
    file_name = source_key.split('/')[-1]
    print(f"File name is {file_name}")