How can we connect Databricks to a SQL database using a service principal with Python?

Posted 2025-01-21 05:20:10


From Azure Databricks I would like to insert some dataframes as tables in an Azure SQL database. How can I connect Azure Databricks to Azure SQL Database using a service principal with Python?

I found something similar to this:

jdbcHostname = "..."
jdbcDatabase = "..."
jdbcPort = ...
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)

connectionProperties = {
  "user" : "...",
  "password" : "...",
  "driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

But I found nothing about doing this with a service principal from Python. How can I do it? Maybe with PySpark, like below?

hostname = "<servername>.database.windows.net"
server_name = "jdbc:sqlserver://{0}".format(hostname)

database_name = "<databasename>"
url = server_name + ";" + "databaseName=" + database_name + ";"
print(url)

table_name = "<tablename>"
username = "<username>"
password = dbutils.secrets.get(scope='', key='passwordScopeName')


Comments (1)

握住我的手 2025-01-28 05:20:10


To connect to Azure SQL Database, you will need to install the SQL Spark Connector and the Microsoft Azure Active Directory Authentication Library for Python.

Go to your cluster in Databricks and install com.microsoft.azure:spark-mssql-connector_2.12_3.0:1.0.0-alpha from Maven, and adal from PyPI. Make sure both the client ID and the secret are stored in Key Vault so they can be read from a secret scope.

Azure SQL

The app needs permission to log in to Azure SQL to access the objects. Create a service principal user for the app, then grant it permissions on the underlying objects; in the example below I have given the service principal SELECT permission on the dbo schema. The code samples are at the reference.


We will also create a table in the database

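The original answer showed these two steps as screenshots, which are not preserved here. Based on the description above, the T-SQL was presumably along these lines; the app registration name my-databricks-app and the table dbo.SampleTable are placeholders of mine, not from the original:

```sql
-- Run as an Azure AD admin on the Azure SQL database.
-- Create a contained user for the app registration (the service principal).
CREATE USER [my-databricks-app] FROM EXTERNAL PROVIDER;

-- Grant it SELECT on the dbo schema, as described above.
-- (Writing dataframes from Databricks would additionally need INSERT
-- on the target table.)
GRANT SELECT ON SCHEMA::dbo TO [my-databricks-app];

-- A table for Databricks to write into.
CREATE TABLE dbo.SampleTable (
    Id   INT,
    Name NVARCHAR(100)
);
```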

Azure SQL Snippet: (shown as screenshots in the original answer; not preserved here)
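The missing snippet presumably resembled the following sketch: acquire an Azure AD token with adal using the service principal's client ID and secret, then pass it to the spark-mssql-connector through its accessToken option. The helper names, tenant_id, and dbo.SampleTable are my placeholders, not from the original answer:

```python
def get_access_token(tenant_id, client_id, client_secret):
    """Acquire an Azure AD token for Azure SQL with adal (installed from PyPI)."""
    import adal  # imported lazily so the other helpers work without it
    authority = "https://login.microsoftonline.com/" + tenant_id
    context = adal.AuthenticationContext(authority)
    token = context.acquire_token_with_client_credentials(
        "https://database.windows.net/", client_id, client_secret)
    return token["accessToken"]

def build_jdbc_url(server, database):
    """JDBC URL in the same shape as the question's PySpark attempt."""
    return "jdbc:sqlserver://{0}.database.windows.net;databaseName={1};".format(
        server, database)

def write_table(df, url, table, access_token):
    """Append a Spark dataframe via the spark-mssql-connector using the token."""
    (df.write
       .format("com.microsoft.sqlserver.jdbc.spark")
       .mode("append")
       .option("url", url)
       .option("dbtable", table)
       .option("accessToken", access_token)
       .option("encrypt", "true")
       .option("hostNameInCertificate", "*.database.windows.net")
       .save())
```

In a notebook you would read the client ID and secret with dbutils.secrets.get(...) as in the question, then call write_table(df, build_jdbc_url("<servername>", "<databasename>"), "dbo.SampleTable", token).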

Reference:

https://www.thedataswamp.com/blog/databricks-connect-to-azure-sql-with-service-principal

https://learn.microsoft.com/en-us/sql/connect/spark/connector?view=sql-server-ver15
