Glue job succeeds, but no data is inserted into the target table (Aurora MySQL)

Posted 2025-01-22 07:36:15

I created a Glue job using the visual tab. First I connected to a MySQL table as the data source, which is already in my Data Catalog. Then, in the transform node, I wrote a custom SQL query to select only one column from the source table. I validated it with the data preview feature, and the transform node works fine. Now I want to write the data to an existing database table that has only one column, with a 'string' data type. The Glue job succeeded, but I don't see the data in the table.

Below is the script automatically generated by the Glue job visual editor.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue import DynamicFrame


def sparkSqlQuery(glueContext, query, mapping, transformation_ctx) -> DynamicFrame:
    # Register each input DynamicFrame as a temp view, run the query against
    # the module-level `spark` session (defined below), and wrap the result
    # back into a DynamicFrame.
    for alias, frame in mapping.items():
        frame.toDF().createOrReplaceTempView(alias)
    result = spark.sql(query)
    return DynamicFrame.fromDF(result, glueContext, transformation_ctx)


args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Script generated for node MySQL
MySQL_node1650299412376 = glueContext.create_dynamic_frame.from_catalog(
    database="glue_rds_test",
    table_name="test_customer",
    transformation_ctx="MySQL_node1650299412376",
)

# Script generated for node SQL
SqlQuery0 = """
select CUST_CODE from customer
"""
SQL_node1650302847690 = sparkSqlQuery(
    glueContext,
    query=SqlQuery0,
    mapping={"customer": MySQL_node1650299412376},
    transformation_ctx="SQL_node1650302847690",
)

# Script generated for node MySQL
MySQL_node1650304163076 = glueContext.write_dynamic_frame.from_catalog(
    frame=SQL_node1650302847690,
    database="glue_rds_test",
    table_name="test_customer2",
    transformation_ctx="MySQL_node1650304163076",
)

job.commit()
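
As a sanity check, it can help to log the record count and schema of the transform output just before the write node: a zero count (for example, if job bookmarks have already consumed the source) or a column name that differs from the target table's would both produce a "succeeded" job that writes nothing. A minimal debugging sketch, reusing the node name from the script above:

# Hypothetical check, placed just before the write node:
print("rows to write:", SQL_node1650302847690.count())
SQL_node1650302847690.printSchema()

If the schema does not match the target table's single string column, an explicit ApplyMapping node can rename and cast the output; "cust_code" below is only a placeholder for whatever the target column is actually named:

# Hypothetical mapping step; adjust the target name/type to the real table.
Mapped_node = ApplyMapping.apply(
    frame=SQL_node1650302847690,
    mappings=[("CUST_CODE", "string", "cust_code", "string")],
    transformation_ctx="Mapped_node",
)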


Comments (1)

屋檐 2025-01-29 07:36:15

For me, the problem was the double quotes around the selected fields in the SQL query. Dropping the double quotes solved it. There is no mention of this in the Spark SQL syntax documentation.

For example, I "wrongly" used this query syntax:

select "CUST_CODE" from customer

instead of this "correct" one:

select CUST_CODE from customer

Your shared sample code does not seem to have this syntax issue, but I thought putting the answer here might help others.
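
For anyone wondering why the double quotes matter: by default, Spark SQL parses a double-quoted token as a string literal, not a column reference, so select "CUST_CODE" returns the constant string CUST_CODE for every row instead of the column values; backticks are Spark's identifier quoting. A minimal sketch (the sample data is made up):

# Reproduce the pitfall on a toy DataFrame.
df = spark.createDataFrame([("C001",), ("C002",)], ["CUST_CODE"])
df.createOrReplaceTempView("customer")

spark.sql('select "CUST_CODE" from customer').show()
# Every row is the literal string CUST_CODE, not the column values.

spark.sql("select `CUST_CODE` from customer").show()
# Backticks quote an identifier, so this returns C001 and C002.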
