Path error when visualising a decision tree classification with dtreeviz

Posted on 2025-02-10 16:25:13


I am trying to visualise my decision tree classification using the code from GitHub at the following link: https://github.com/parrt/dtreeviz/blob/master/notebooks/dtreeviz_spark_visualisations.ipynb
When I run this code:

df = spark.read.parquet("../../dtreeviz/testing/testlib/models/fixtures/spark_3_0_decision_tree_classifier.model/training_df")

I am getting the following error:

AnalysisException                         Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12920/640132816.py in <module>
----> 1 df = spark.read.parquet("../../dtreeviz/testing/testlib/models/fixtures/spark_3_0_decision_tree_classifier.model/training_df")

C:\spark\spark-3.2.1-bin-hadoop2.7\python\pyspark\sql\readwriter.py in parquet(self, *paths, 
**options)
    299                        int96RebaseMode=int96RebaseMode)
    300 
--> 301         return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
    302 
    303     def text(self, paths, wholetext=False, lineSep=None, pathGlobFilter=None,

C:\spark\spark-3.2.1-bin-hadoop2.7\python\lib\py4j-0.10.9.3-src.zip\py4j\java_gateway.py in 
__call__(self, *args)
   1319 
   1320         answer = self.gateway_client.send_command(command)
-> 1321         return_value = get_return_value(
   1322             answer, self.gateway_client, self.target_id, self.name)
   1323 

C:\spark\spark-3.2.1-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
    115                 # Hide where the exception came from that shows a non-Pythonic
    116                 # JVM exception message.
--> 117                 raise converted from None
    118             else:
    119                 raise

AnalysisException: Path does not exist: file:/C:/Users/dtreeviz/testing/testlib/models/fixtures/spark_3_0_decision_tree_classifier.model/training_df

I followed all the instructions at this link: https://github.com/parrt/dtreeviz

I couldn't find the path on my local machine. I am confused about what the code does, as I am not familiar with the Parquet format; it looks like a path, but what does .model refer to?
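For context, a directory ending in .model is usually just what Spark ML's save() writes out for a fitted model, and training_df is a DataFrame stored as Parquet next to it; judging from the path, these are test fixtures under testing/testlib/models/fixtures in the dtreeviz repository, so the relative path presumably only resolves when the notebook runs inside a clone of that repository. A minimal sketch of how such a fixture might be produced and read back, assuming Spark 3.x; the paths and column names below are invented for illustration and are not the repository's actual code:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.getOrCreate()

# Illustrative training data (made-up columns).
df = spark.createDataFrame(
    [(5.1, 3.5, 0), (6.2, 2.9, 1), (5.9, 3.0, 1), (4.7, 3.2, 0)],
    ["sepal_length", "sepal_width", "label"],
)
assembled = VectorAssembler(
    inputCols=["sepal_length", "sepal_width"], outputCol="features"
).transform(df)

model = DecisionTreeClassifier(labelCol="label").fit(assembled)

# save() writes a directory; the ".model" suffix is only a naming convention.
model.save("/tmp/example_decision_tree_classifier.model")

# The training data is stored next to it as Parquet, which is what the
# notebook cell tries to read back with spark.read.parquet(...).
df.write.parquet("/tmp/example_decision_tree_classifier.model/training_df")
training_df = spark.read.parquet("/tmp/example_decision_tree_classifier.model/training_df")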

1 Answer

荒芜了季节 2025-02-17 16:25:13


I took a look at the notebook. Indeed, it contains some unnecessary code that was used for development/testing.

In your case, the 'df' dataframe is not needed for the actual visualisations. You can comment it out and the visualisations should work.
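As a rough sketch of what that leaves you with once the fixture read is commented out — training a Spark decision tree on your own data and visualising it — the following assumes dtreeviz 2.x and its unified dtreeviz.model() API (the linked notebook may use an older call style), with invented column names:

import dtreeviz
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.getOrCreate()

# Your own data instead of the repository fixture; columns are illustrative.
pdf = pd.DataFrame({
    "sepal_length": [5.1, 6.2, 5.9, 4.7, 6.5, 5.0],
    "sepal_width":  [3.5, 2.9, 3.0, 3.2, 2.8, 3.4],
    "label":        [0, 1, 1, 0, 1, 0],
})
sdf = spark.createDataFrame(pdf)

features = ["sepal_length", "sepal_width"]
assembled = VectorAssembler(inputCols=features, outputCol="features").transform(sdf)
spark_model = DecisionTreeClassifier(labelCol="label").fit(assembled)

# dtreeviz takes the fitted Spark model plus the raw training data as pandas/numpy.
viz_model = dtreeviz.model(
    spark_model,
    X_train=pdf[features],
    y_train=pdf["label"],
    feature_names=features,
    target_name="label",
    class_names=["class 0", "class 1"],
)
viz_model.view()  # returns the tree rendering; leave it as the last expression in a notebook cell to display it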
