I am currently learning deep learning using TensorFlow in Python.

In TensorFlow, once you hand the loss to an optimizer it runs the optimization on its own, but is there a way to inspect and extract the values of the weights w and the gradients in the middle of optimization?

Also, ordinary optimization only needs the gradients with respect to the weights, but I want the gradient of the loss with respect to the input. The optimizer probably won't give me that, so I suspect I can manage it with the gradients() function, but I don't understand it well.

Thanks in advance for your help.

deep-learning tensorflow

2022-09-30 13:56

```
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)  # list of (gradient, variable) pairs
train_op = optimizer.apply_gradients(grads_and_vars)
```

You can extract the gradients from grads_and_vars; each entry is a (gradient, variable) pair.
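Putting that together, here is a minimal runnable sketch with a toy one-weight model and made-up data (both hypothetical, not from the question). Under TF 2.x the 1.x graph API is reached via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf  # under TF 1.x, `import tensorflow as tf` suffices
tf.disable_eager_execution()

# Toy model (hypothetical): fit w so that w * x approximates y.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.get_variable("w", [], tf.float32, initializer=tf.zeros_initializer())
cost = tf.reduce_mean(tf.square(w * x - y))

optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
grads_and_vars = optimizer.compute_gradients(cost)  # [(gradient, variable), ...]
train_op = optimizer.apply_gradients(grads_and_vars)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
feed = {x: [1.0, 2.0, 3.0], y: [2.0, 4.0, 6.0]}
for step in range(5):
    # Peek at the gradient and weight values before each update step.
    gv = sess.run(grads_and_vars, feed_dict=feed)
    print("step", step, "w =", gv[0][1], "dcost/dw =", gv[0][0])
    sess.run(train_op, feed_dict=feed)
```

Running the gradient tensors through session.run is what lets you "see and take out" the values mid-optimization: the update itself is applied only when train_op is run.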

2022-09-30 13:56

```
params = tf.trainable_variables()
gradients = tf.gradients(loss, params)
train_op = tf.train.AdamOptimizer(0.1).apply_gradients(zip(gradients, params), global_step=global_step)
```

After computing the gradients like this, you can apply them to run the optimization.
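For the question's second part, the gradient of the loss with respect to the input: tf.gradients accepts any tensor as its second argument, not just trainable variables. A sketch under assumed shapes and a made-up loss (the names x, w_in, and the loss are hypothetical; under TF 2.x use tf.compat.v1):

```python
import tensorflow.compat.v1 as tf  # under TF 1.x, `import tensorflow as tf` suffices
tf.disable_eager_execution()

# Hypothetical model: loss = (x . w)^2 with w fixed at ones.
x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.get_variable("w_in", [3, 1], tf.float32, initializer=tf.ones_initializer())
loss = tf.reduce_sum(tf.square(tf.matmul(x, w)))

# tf.gradients returns one gradient tensor per tensor in its second argument,
# so take element [0]; the result has the same shape as x.
dloss_dx = tf.gradients(loss, x)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(dloss_dx, feed_dict={x: [[1.0, 2.0, 3.0]]})
    print(g)  # for this input: [[12. 12. 12.]]
```

Here x is a placeholder rather than a variable, so no optimizer is involved; you simply evaluate the gradient tensor with session.run for whatever input you feed.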

If you want to see the weights or biases,

```
w = tf.get_variable("weight", [input_dim, output_dim], tf.float32)
b = tf.get_variable("bias", [output_dim], tf.float32)
print(session.run([w, b]))
```

you can see the current weight and bias values, as shown.

2022-09-30 13:56


© 2023 OneMinuteCode. All rights reserved.