How to connect to Google Storage from Google Cloud Functions



I am trying to set up a quick automated task for parsing and cleaning data, such that:

  • When data enters BUCKET1, the function starts automatically
  • Data gets parsed, cleaned, ETLed if I may say so
  • Data gets saved to BUCKET2

So I chose a Google Cloud Function triggered by changes in BUCKET1, but now I am unable to access files from BUCKET1. Any idea what I am doing wrong?

import pandas as pd
import glob
from google.cloud import storage
storage_client = storage.Client(project='MyProjectName')

paths = []
all_dfs = []

def hello_gcs(event, context):
    #"""Triggered by a change to a Cloud Storage bucket.
    #Args:
    #     event (dict): Event payload.
    #     context (google.cloud.functions.Context): Metadata for the event.
    #"""
    #file = event
    #print(f"Processing file: {file['name']}.")

    for files in glob.glob("gs:/BUCKET1/*/*.csv"):
        paths.append(files)
    print(paths)
    print("testingtesting")
    for i in range(len(paths)):
        temp_list = paths[i].split("_")
        date_temp_list = temp_list[2]
        read_date = date_temp_list.split("T")[0]

        globals()['table%s' % i] = pd.read_csv(paths[i], index_col=None, header=0)  # create new dfs based on subfolder structure
        globals()['table%s' % i]["Read Date"] = read_date
        all_dfs.append(globals()['table%s' % i])

    output_df = pd.concat(all_dfs, axis=0, ignore_index=True)
    output_df.to_csv("gs:/BUCKET2/Filename.csv")

This code works flawlessly when executed locally in a Jupyter Notebook; however, when executed in Google Cloud Functions it does not even load 'paths' (hence I was checking with print(paths)), which returns an empty list. How can I access GCS properly?
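
A likely cause of the empty list: glob.glob only scans the local filesystem of the Cloud Functions instance, so a "gs:/..." pattern matches nothing. The Cloud Storage trigger already passes the bucket and object name of the changed file in the event payload, and the google-cloud-storage client imported above can read and write objects directly. Below is a minimal per-file sketch under those assumptions ("BUCKET2" is a placeholder destination bucket, and the date extraction mirrors the original underscore/"T" split), not a confirmed answer:

import io

import pandas as pd
from google.cloud import storage

storage_client = storage.Client()

def hello_gcs(event, context):
    """Triggered by a change to BUCKET1; writes the cleaned CSV to BUCKET2."""
    name = event["name"]  # object path of the file that triggered the function
    if not name.endswith(".csv"):
        return

    # Download the triggering object with the storage client instead of glob.
    src_bucket = storage_client.bucket(event["bucket"])
    text = src_bucket.blob(name).download_as_text()
    df = pd.read_csv(io.StringIO(text), index_col=None, header=0)

    # Assumes the third "_"-separated segment carries the date, as in the original split.
    read_date = name.split("_")[2].split("T")[0]
    df["Read Date"] = read_date

    # "BUCKET2" is a placeholder for the destination bucket name.
    dst_bucket = storage_client.bucket("BUCKET2")
    dst_bucket.blob(name).upload_from_string(df.to_csv(index=False), content_type="text/csv")

If you prefer pandas-style URLs, pd.read_csv("gs://BUCKET1/folder/file.csv") also works once the gcsfs package is listed in requirements.txt; note the double slash in gs:// (the question's code uses gs:/).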
