Dropout model cost becomes NaN after 1500 iterations
[Screenshot of the cost output: https://i.sstatic.net/yvlpo.png]
I am trying to implement a dropout model in the style of Andrew Ng's Deep Learning course 1, week 4, using the data from course 2, week 1.
- "dropout_project.ipynb" is the main project file; when I run it, the cost becomes NaN after about 1500 iterations.
- "deep_nn.py" and "dropout_and_regularization.py" are the helper function files.
- I have already tested my implementation for bugs.
- I also have one doubt: should the dropout mask "d" change on every iteration, or stay a fixed constant across iterations? In my implementation I keep d1 and d2 fixed by calling np.random.seed(1) at the start of every iteration (see the sketch after the code below).
Can someone please help me?
GitHub repository link
import numpy as np
import matplotlib.pyplot as plt

import deep_nn as nn                     # helper functions (course-1 week-4 style)
import dropout_and_regularization as dr  # dropout forward/backward helpers


def dropout_model(x, y, keep_probs=0.86, learning_rate=0.3,
                  num_iterations=30000, print_cost=True):
    # With keep_probs = 0.86, roughly 14% of the neurons are dropped.
    costs = []
    layer_dims = [x.shape[0], 20, 3, 1]
    parameters = nn.initialize_parameters_deep(layer_dims)

    for i in range(num_iterations):
        # Re-seeding here makes np.random.rand return the same values on
        # every pass, so the dropout masks d1 and d2 are fixed constants.
        np.random.seed(1)
        aL, caches, d_list = dr.dropout_L_model_forward(x, parameters, keep_probs)
        cost = nn.cross_entropy_cost(aL, y)
        grads = dr.dropout_L_model_backward(aL, y, caches, d_list, keep_probs)
        parameters = nn.update_parameters(parameters, grads, learning_rate)

        if i % 1000 == 0:
            costs.append(cost)
            if print_cost:
                print(f'Cost after iteration {i} is {cost}')

    plt.plot(costs)
    plt.ylabel("Cost")
    plt.xlabel("iteration (x1000)")
    plt.title(f"Learning rate : {learning_rate}")
    plt.show()
    return parameters, costs


dropout_parameters, dropout_cost = dropout_model(x_train, y_train)
_ = nn.predict(x_train, y_train, dropout_parameters)
_ = nn.predict(x_test, y_test, dropout_parameters, "Test")
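On the doubt about "d": in the course's inverted-dropout scheme the mask is resampled on every forward pass, not held constant; re-seeding inside the loop freezes the mask and defeats the regularization. Below is a minimal sketch of a per-layer dropout forward step under that convention. The helper name and shapes are illustrative, not the actual dr.dropout_L_model_forward API from the repo:

import numpy as np

def dropout_forward_layer(A, keep_probs):
    # Illustrative helper, not the repo's actual API. A fresh random mask
    # is drawn on every call, so each training iteration drops a
    # different random subset of neurons.
    D = np.random.rand(*A.shape) < keep_probs  # new mask each call
    A = A * D                                  # zero out dropped neurons
    A = A / keep_probs                         # inverted dropout: rescale
    return A, D

With this convention, np.random.seed(1) should be called once before training (for reproducibility), not at the start of every iteration.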
Answer:
There are two reasons for the cost going NaN: aL can saturate to exactly 0, or to exactly 1. Either way, np.log in the cross-entropy cost and the divisions in daL = -(y/aL - (1-y)/(1-aL)) blow up, which gives an error in both the cost and daL.
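One common fix is to clip aL away from 0 and 1 before taking logs or dividing. Here is a minimal sketch assuming the course-style cross-entropy with a sigmoid output of shape (1, m); the function name and the epsilon value are illustrative choices, not from the original code:

import numpy as np

def safe_cross_entropy_cost(aL, y, eps=1e-8):
    # Clip predictions so np.log and the divisions in daL never see
    # exactly 0 or 1, which is what produces NaN after many iterations.
    aL = np.clip(aL, eps, 1 - eps)
    m = y.shape[1]
    cost = -np.sum(y * np.log(aL) + (1 - y) * np.log(1 - aL)) / m
    daL = -(np.divide(y, aL) - np.divide(1 - y, 1 - aL))
    return cost, daL

Lowering the learning rate (for example from 0.3 to 0.1 or less) also makes it less likely that aL saturates in the first place.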