I am currently learning deep learning and I am using TensorFlow in Python.

In TensorFlow, once you hand the loss to an optimizer it runs the optimization on its own, but is there a way to look at the values of the weights w and their gradients partway through optimization?

Also, ordinary optimization only needs the gradients with respect to the weights, but I want the gradient of the loss with respect to the input. The optimizer probably won't give me that, so I think I can manage it with the gradients() function, but I don't understand it well.

Thank you in advance.

deep-learning tensorflow

2022-09-30 13:56

```
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# compute_gradients() returns a list of (gradient, variable) pairs
grads_and_vars = optimizer.compute_gradients(cost)
train_op = optimizer.apply_gradients(grads_and_vars)
```

You can fetch the gradient values by running `grads_and_vars` in a session.

2022-09-30 13:56

```
params = tf.trainable_variables()
gradients = tf.gradients(loss, params)
# global_step is an optional counter variable that apply_gradients() increments
tf.train.AdamOptimizer(0.1).apply_gradients(zip(gradients, params), global_step)
```

After computing the gradients like this, you can apply them to run the optimization.
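`tf.gradients()` also answers the original question about the gradient with respect to the input, because it differentiates through any tensor, not just trainable variables. A minimal sketch, assuming a made-up graph `loss = (w * x)^2` where `x` is a placeholder input:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Hypothetical graph: loss = (w * x)^2, where x is an input, not a variable
x = tf.placeholder(tf.float32, shape=[])
w = tf.Variable(3.0)
loss = tf.square(w * x)

# gradient of the loss with respect to the *input* x: d(loss)/dx = 2 * w^2 * x
grad_x = tf.gradients(loss, [x])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad_x, feed_dict={x: 2.0}))  # 2 * 3^2 * 2 = 36.0
```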

If you want to see the weights or biases,

```
w = tf.get_variable("weight", [input_dim, output_dim], tf.float32)
b = tf.get_variable("bias", [output_dim], tf.float32)
print(session.run([w, b]))
```

session.run() returns the current weight and bias values, which you can print as shown.

2022-09-30 13:56


© 2022 OneMinuteCode. All rights reserved.