Use a DF's schema definition in another DF?
I have a spark df with a schema like this:
print(df.schema)
StructType(List(StructField(column_info,ArrayType(StructType(List(StructField(column_datatype,StringType,true),StructField(column_description,StringType,true),StructField(column_length,StringType,true),StructField(column_name,StringType,true),StructField(column_personally_identifiable_information,StringType,true),StructField(column_precision,StringType,true),StructField(column_primary_key,StringType,true),StructField(column_scale,StringType,true),StructField(column_security_classifications,ArrayType(StringType,true),true),StructField(column_sequence_number,StringType,true))),true),true),StructField(file_code_page,StringType,true),StructField(file_delimiter,StringType,true),StructField(file_description,StringType,true),StructField(file_end_of_line_char,StringType,true),StructField(file_extension,StringType,true),StructField(file_footer_rows,StringType,true),StructField(file_header_rows,StringType,true),StructField(file_name,StringType,true),StructField(logs_id,StringType,true),StructField(metadata_version,StringType,true),StructField(oar_id,StringType,true),StructField(schema_version,StringType,true)))
I want to use this schema in another DF. To do so, I manually adjusted it into this format:
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

mdata_schema = StructType([
    StructField('column_info', ArrayType(StructType([
        StructField('column_datatype', StringType(), True),
        StructField('column_description', StringType(), True),
        StructField('column_length', StringType(), True),
        StructField('column_name', StringType(), True),
        StructField('column_personally_identifiable_information', StringType(), True),
        StructField('column_precision', StringType(), True),
        StructField('column_primary_key', StringType(), True),
        StructField('column_scale', StringType(), True),
        StructField('column_security_classifications', ArrayType(StringType(), True), True),
        StructField('column_sequence_number', StringType(), True)]), True), True),
    StructField('file_code_page', StringType(), True),
    StructField('file_delimiter', StringType(), True),
    StructField('file_description', StringType(), True),
    StructField('file_end_of_line_char', StringType(), True),
    StructField('file_extension', StringType(), True),
    StructField('file_footer_rows', StringType(), True),
    StructField('file_header_rows', StringType(), True),
    StructField('file_name', StringType(), True),
    StructField('logs_id', StringType(), True),
    StructField('metadata_version', StringType(), True),
    StructField('oar_id', StringType(), True),
    StructField('schema_version', StringType(), True)
])
Is there a way to avoid this manual adjustment? Is there a built-in method to extract the schema so I can use it automatically in another DF?
As others have said, if you have that DataFrame accessible in your notebook, you can read df.schema.fields into a StructType and use that as the schema; otherwise, you can use the function below to generate a schema string from the first DataFrame and use the output string as the schema for the second DataFrame.
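Since the helper function referenced above is not shown here, the snippet below is only a minimal sketch of the two approaches described, assuming a SparkSession named spark; other_data and the file path are hypothetical placeholders. It reuses df.schema directly when the first DataFrame is in scope, and otherwise round-trips the schema through its JSON string with df.schema.json() and StructType.fromJson.

import json
from pyspark.sql.types import StructType

# Option 1: the first DataFrame is accessible in the same notebook,
# so its schema object can be passed along as-is.
other_df = spark.createDataFrame(other_data, schema=df.schema)  # other_data is a placeholder

# Option 2: serialize the schema to a JSON string (store it wherever you like)
# and rebuild it later for the second DataFrame.
schema_str = df.schema.json()                                   # JSON string representation
mdata_schema = StructType.fromJson(json.loads(schema_str))
other_df = spark.read.schema(mdata_schema).json("path/to/other/data")  # hypothetical path

Either way, there is no need to hand-write the StructType definition.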