Error when replicating the Keras-LSTM example for SHAP-based interpretability
I am trying to replicate the Keras-LSTM DeepExplainer example. A minimal sketch of the setup is shown below; when I try to compute the shap values I get the warning and the error that follow it.
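For reference, the relevant part of the setup looks roughly like this (a minimal sketch: the data pipeline and hyperparameters are taken from the public shap notebook, and I am using tf.keras imports here; my actual run may differ in details):

import shap
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

max_features = 20000   # vocabulary size, as in the notebook (assumed)
maxlen = 80            # padded sequence length, as in the notebook (assumed)

# load and pad the IMDB data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# small LSTM sentiment classifier
model = Sequential([
    Embedding(max_features, 128),
    LSTM(128, dropout=0.2, recurrent_dropout=0.2),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=1, validation_data=(x_test, y_test))

# explain predictions with DeepExplainer -- this is where the error below is raised
explainer = shap.DeepExplainer(model, x_train[:100])
shap_values = explainer.shap_values(x_test[:10])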
This warning: keras is no longer supported, please use tf.keras instead.
Your TensorFlow version is newer than 2.4.0 and so graph support has been removed in eager mode and some static graphs may not be supported. See PR #1483 for discussion.
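Going by that warning, my guess is that DeepExplainer wants TF1-style graph mode rather than eager execution, so something along these lines might be a workaround (purely an assumption on my part, not something the shap docs state); I have not confirmed it restores the behaviour the example expects:

import tensorflow as tf

# switch TensorFlow back to TF1-style graph mode *before* building the model;
# this disables eager execution, which the shap warning seems to be about
tf.compat.v1.disable_v2_behavior()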
And this error:
TypeError                                 Traceback (most recent call last)
in <module>
      1 import shap
      2 explainer = shap.DeepExplainer(model, x_train[:100])
----> 3 shap_values = explainer.shap_values(x_test[:10])

~/miniconda3/envs/mtq/lib/python3.8/site-packages/shap/explainers/_deep/__init__.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
    122             were chosen as "top".
    123         """
--> 124         return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)

~/miniconda3/envs/mtq/lib/python3.8/site-packages/shap/explainers/_deep/deep_tf.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
    306                 # run attribution computation graph
    307                 feature_ind = model_output_ranks[j,i]
--> 308                 sample_phis = self.run(self.phi_symbolic(feature_ind), self.model_inputs, joint_input)
    309
    310                 # assign the attributions to the right part of the output arrays

~/miniconda3/envs/mtq/lib/python3.8/site-packages/shap/explainers/_deep/deep_tf.py in run(self, out, model_inputs, X)
    363
    364             return final_out
--> 365         return self.execute_with_overridden_gradients(anon)
    366
    367     def custom_grad(self, op, *grads):

~/miniconda3/envs/mtq/lib/python3.8/site-packages/shap/explainers/_deep/deep_tf.py in execute_with_overridden_gradients(self, f)
    399         # define the computation graph for the attribution values using a custom gradient-like computation
    400         try:
--> 401             out = f()
    402         finally:
    403             # reinstate the backpropagatable check

~/miniconda3/envs/mtq/lib/python3.8/site-packages/shap/explainers/_deep/deep_tf.py in anon()
    356                 shape = list(self.model_inputs[i].shape)
    357                 shape[0] = -1
--> 358                 data = X[i].reshape(shape)
    359                 v = tf.constant(data, dtype=self.model_inputs[i].dtype)
    360                 inputs.append(v)

TypeError: 'NoneType' object cannot be interpreted as an integer
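My reading of the last frame is that one of the non-batch entries of list(self.model_inputs[i].shape) is still None (an unknown dimension), because numpy's reshape raises exactly this message when it is handed a None. A tiny demo of that failure mode (the shapes here are just my guess, not taken from the example):

import numpy as np

# mimic what shap does in anon(): take the symbolic input shape,
# replace the batch dimension with -1, and reshape the concrete batch to it
x = np.zeros((10, 80))          # a batch like x_test[:10] after padding (assumed shape)

shape = [None, 80]              # only the batch dimension is unknown
shape[0] = -1
print(x.reshape(shape).shape)   # (10, 80) -- fine

shape = [None, None]            # the sequence length is also unknown (my guess at the cause)
shape[0] = -1
try:
    x.reshape(shape)
except TypeError as e:
    print(e)                    # 'NoneType' object cannot be interpreted as an integer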
I have checked PR #1483, but couldn't find a relevant fix there. Please suggest which tensorflow, keras, and shap versions are needed to successfully replicate the example.
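If it helps to compare against a known-good environment, this is the kind of snippet I would use to report my exact versions (Python is 3.8 per the paths above, and TensorFlow is newer than 2.4.0 per the warning):

import sys
import tensorflow as tf
import shap

# report the interpreter and library versions in the current environment
print("python     :", sys.version.split()[0])
print("tensorflow :", tf.__version__)
print("tf.keras   :", tf.keras.__version__)
print("shap       :", shap.__version__)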