I am currently learning deep learning using TensorFlow in Python.
With TensorFlow, if you hand the loss to an optimizer, it optimizes on its own — but is there a way to look at the values of the weights and their gradients in the middle of optimization?
Also, ordinary optimization only needs the gradients with respect to the weights, but I want the gradient of the loss with respect to the input. The optimizer probably won't give me that, so I suspect I can manage it with the gradients() function, but I don't understand it well.
Thank you in advance.

deep-learning tensorflow
You can compute the gradients explicitly and then feed them back to the optimizer:

```python
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)  # list of (gradient, variable) pairs
train_op = optimizer.apply_gradients(grads_and_vars)
```

compute_gradients() returns a list of (gradient, variable) pairs, so you can pull out each gradient tensor and evaluate it with session.run() to inspect it.
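As a concrete, runnable sketch of inspecting a gradient mid-optimization (the toy cost `w**2` and the variable names are my own; on TF 2.x the 1.x API shown in this answer is available via `tf.compat.v1`):

```python
import tensorflow.compat.v1 as tf  # TF 1.x API on a TF 2.x install
tf.disable_v2_behavior()

# Toy cost so the gradient is easy to check by hand: cost = w^2, d(cost)/dw = 2w.
w = tf.get_variable("w", initializer=tf.constant([3.0]))
cost = tf.reduce_sum(tf.square(w))

optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
grads_and_vars = optimizer.compute_gradients(cost)  # [(grad_tensor, var_tensor), ...]
train_op = optimizer.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluate the gradient and variable tensors to peek at their values ...
    grad_val, var_val = sess.run([grads_and_vars[0][0], grads_and_vars[0][1]])
    print("gradient:", grad_val, "weight:", var_val)
    # ... then take one optimization step.
    sess.run(train_op)
```

With `w = 3.0`, the printed gradient is `2 * 3.0 = 6.0`, confirming you are seeing the actual gradient the optimizer uses.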
Alternatively, tf.gradients() computes the gradients directly, and you can hand them to the optimizer yourself:

```python
params = tf.trainable_variables()
gradients = tf.gradients(loss, params)
train_op = tf.train.AdamOptimizer(0.1).apply_gradients(zip(gradients, params), global_step=global_step)
```

Once you have obtained the gradients this way, you can apply them to run the optimization.
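tf.gradients() is not limited to trainable variables — you can pass any tensor as the second argument, including the input placeholder, which answers the second part of the question. A minimal sketch (the toy model `loss = sum(x @ w)` and all names are assumptions for illustration; `tf.compat.v1` gives the 1.x API on TF 2.x):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x API on a TF 2.x install
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=[None, 2])       # the input
w = tf.get_variable("w_in", initializer=tf.ones([2, 1]))
loss = tf.reduce_sum(tf.matmul(x, w))                 # toy loss = sum(x @ w)

# Gradient of the loss with respect to the *input*, not the weights.
grad_x = tf.gradients(loss, x)[0]                     # d(loss)/dx

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    gx = sess.run(grad_x, feed_dict={x: np.array([[1.0, 2.0]], dtype=np.float32)})
    print(gx)
```

Here `d(loss)/dx = w.T`, so with `w` initialized to ones the printed gradient is `[[1. 1.]]` regardless of the input value — a quick sanity check that the graph is differentiating with respect to `x`.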
If you want to see the weights or biases:

```python
w = tf.get_variable("weight", [input_dim, output_dim], tf.float32)
b = tf.get_variable("bias", [output_dim], tf.float32)
print(session.run([w, b]))
```

session.run([w, b]) returns the current weight and bias values, so you can print them at any point during training.
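Putting the pieces together, here is one possible end-to-end sketch (the toy data fitting `y = 2x`, the step count, and the learning rate are all arbitrary choices for illustration): a weight, a bias, gradients computed via compute_gradients(), and the values inspected while training runs.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x API on a TF 2.x install
tf.disable_v2_behavior()
tf.set_random_seed(0)

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("weight", [1, 1], tf.float32)
b = tf.get_variable("bias", [1], tf.float32)
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

optimizer = tf.train.AdamOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars)

# Toy data: y = 2x, so w should move toward 2 and b toward 0.
feed = {x: np.array([[0.0], [1.0], [2.0]]), y: np.array([[0.0], [2.0], [4.0]])}

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_before = sess.run(loss, feed_dict=feed)
    for step in range(300):
        # Fetch w, b, and the weight gradient alongside the training step.
        _, w_val, b_val, g_val = sess.run(
            [train_op, w, b, grads_and_vars[0][0]], feed_dict=feed)
    loss_after = sess.run(loss, feed_dict=feed)
    print("w:", w_val, "b:", b_val, "final loss:", loss_after)
```

Because every fetch goes through the same session.run() call, you pay for only one graph execution per step while still getting the intermediate values out.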