Adjusting column values for insertion into a DataFrame

Published 2025-01-10 13:55:16


At my company we have a SQL database that we connect to via pyodbc. When we run a query, pyodbc returns a list of tuples; typically such a list contains more than 10 tuples. Here is an example of a possible output we get:

OUTPUT =
[(datetime.datetime(2003, 3, 26, 15, 12, 15), '490002_space'),
 (datetime.datetime(2003, 3, 27, 16, 13, 14), '490002_space')] 

I would like to strip the '_space' suffix and replace the datetime.datetime(...) objects with strings in every tuple we receive in OUTPUT. Eventually I hope to pass the new list of tuples to a pandas DataFrame. Any advice on how to efficiently transform OUTPUT into DESIRED_OUTPUT would be appreciated:

DESIRED_OUTPUT:
[('2003, 3, 26, 15, 12, 15', '490002'),
 ('2003, 3, 27, 16, 13, 14', '490002')] 

Really hope to hear from you.

Greetings,
Jerome

Comments (2)

就像说晚安 2025-01-17 13:55:16


Here is a possible solution:

result = [(str(dt.timetuple()[:6])[1:-1], s.split('_')[0]) for dt, s in OUTPUT]
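Building on that one-liner, here is a self-contained sketch that also loads the cleaned tuples into a DataFrame, as the question asks. The column names 'timestamp' and 'code' are illustrative assumptions, not from the original data:

```python
import datetime
import pandas as pd

OUTPUT = [
    (datetime.datetime(2003, 3, 26, 15, 12, 15), '490002_space'),
    (datetime.datetime(2003, 3, 27, 16, 13, 14), '490002_space'),
]

# timetuple()[:6] yields (year, month, day, hour, minute, second);
# str(...)[1:-1] drops the surrounding parentheses,
# and split('_')[0] drops the '_space' suffix
result = [(str(dt.timetuple()[:6])[1:-1], s.split('_')[0]) for dt, s in OUTPUT]

# column names are hypothetical placeholders
df = pd.DataFrame(result, columns=['timestamp', 'code'])
print(df)
```

Note that this keeps the timestamp as a plain string, exactly as in DESIRED_OUTPUT; if you later need real datetimes in pandas, it is simpler to keep the datetime objects and let pandas store them natively.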
吾性傲以野 2025-01-17 13:55:16


Eventually I hope to pass the new list of tuples to a pandas dataframe.

You can use .read_sql_query() to pull the information directly into a DataFrame:

import pandas as pd
import sqlalchemy as sa

connection_url = sa.engine.URL.create(
    "mssql+pyodbc",
    username="scott",
    password="tiger^5HHH",
    host="192.168.0.199",
    database="test",
    query={
        "driver": "ODBC Driver 18 for SQL Server",
        "TrustServerCertificate": "Yes",
    }
)

engine = sa.create_engine(connection_url)

table_name = "so71297370"

# set up example environment
with engine.begin() as conn:
    conn.exec_driver_sql(f"DROP TABLE IF EXISTS {table_name}")
    conn.exec_driver_sql(f"CREATE TABLE {table_name} (col1 datetime2, col2 nvarchar(50))")
    conn.exec_driver_sql(f"""\
        INSERT INTO {table_name} (col1, col2) VALUES
        ('2003-03-26 15:12:15', '490002_space'), 
        ('2003-03-27 16:13:14', '490002_space')
    """)

# example
df = pd.read_sql_query(
    # suffix '_space' is 6 characters in length
    f"SELECT col1, LEFT(col2, LEN(col2) - 6) AS col2 FROM {table_name}",
    engine,
)
print(df)
"""
                 col1    col2
0 2003-03-26 15:12:15  490002
1 2003-03-27 16:13:14  490002
"""