Why use np.random.randn as the initial value of the weight?

Asked 5 months ago, Updated 5 months ago, 16 views

Question 1

"When I was studying ""Deep Learning from scratch"" and reading 182p, I couldn't understand why I had set the initial value of weight W to np.random.randn until now."So please tell me why np.random.randn is the initial value of the weight.

Question 2
I read on the net that np.random.randn "returns random numbers that follow the normal distribution with mean 0 and variance 1 (standard deviation 1), i.e. the standard normal distribution", but I don't understand what this means either. I executed the code below expecting the average to be zero, but it wasn't. Please tell me what this sentence means.

 import numpy as np

 tu = np.random.randn(1, 100)
 tuuu = 0
 for i in range(100):
     tuuu += tu[0][i]
 print(tuuu / 100)
 # Output: 0.22453386331188382  (different every time; I expected the average to be zero)
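As a side note (my addition, not from the book): a mean of about 0.22 over only 100 samples is not surprising. The mean of n independent standard-normal draws is itself a random quantity with standard deviation 1/sqrt(n), so for n = 100 fluctuations of roughly ±0.1 to ±0.3 are entirely normal. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the result is reproducible
n = 100
trials = 10_000

# Means of 10,000 independent samples of size 100 each
means = rng.standard_normal((trials, n)).mean(axis=1)

# Theory: the sample mean has standard deviation 1/sqrt(100) = 0.1
print(means.std())        # close to 0.1
print(abs(means.mean()))  # close to 0
```

So a single sample mean of 0.22 is only a bit over two standard deviations away from zero; the mean is only exactly zero in the limit of infinitely many samples.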

python machine-learning neural-network

2022-09-30 11:02

2 Answers

The article "Deep Learning from Scratch, Chapter 6: About Initial Weight Values" may be helpful.
If the gradients reach zero, learning will not progress, so it is important to choose initial values that prevent vanishing gradients.
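To add a sketch of the other standard reason for random initialization (my addition, a hypothetical toy example, not the answerer's code): if all weights start with the same constant value, every hidden unit computes the same output and receives the same gradient, so the units stay identical forever. Random initialization with np.random.randn breaks this symmetry:

```python
import numpy as np

np.random.seed(0)  # seeded so the run is reproducible

def layer_grads(W1):
    """Gradient of a tiny 2-layer net's output w.r.t. W1 (toy example)."""
    x = np.array([1.0, 2.0])   # single input
    h = np.tanh(W1 @ x)        # hidden layer with 3 units
    w2 = np.ones(3)            # fixed output weights for simplicity
    dh = w2 * (1 - h ** 2)     # backprop through tanh (with dL/dy = 1)
    return np.outer(dh, x)     # dL/dW1, one row per hidden unit

# Constant (zero) init: every hidden unit gets the *same* gradient row,
# so the units remain identical no matter how long you train.
g_const = layer_grads(np.zeros((3, 2)))
print(g_const)

# Random init: the rows differ, so the units can learn different features.
g_rand = layer_grads(np.random.randn(3, 2) * 0.01)
print(g_rand)
```

With the zero initialization, all three rows of the gradient are identical; with the randn initialization, they differ, which is exactly what lets the network's units specialize.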

The article on what a neural network learns might also be helpful.

Random draws are independent. It is not like rolling a die where five different faces have already come up, so the sixth roll must be the one remaining face. After drawing 99 random numbers, the 100th is not going to be exactly the value that brings the average to zero, as if you could command "numpy.random, do your job!"
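The independence point above can be checked directly (a sketch I'm adding, not part of the original answer): if the last draw "compensated" for the others, it would be correlated with the (negated) sum of the first 99 draws. Because the draws are independent, that correlation is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded for reproducibility
samples = rng.standard_normal((100_000, 100))

first99_sum = samples[:, :99].sum(axis=1)
last = samples[:, 99]

# Independence means the last draw carries no information about the
# running sum of the previous 99, so the correlation is near zero.
corr = np.corrcoef(first99_sum, last)[0, 1]
print(corr)  # near 0
```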

 import matplotlib.pyplot as plt
 import numpy as np

 num = []
 mean = []
 std = []
 for cnt in range(1, 1200):  # loop body reconstructed; the original was garbled
     n = cnt * 100           # sample sizes 100, 200, ..., 119,900
     x = np.random.randn(n)
     num.append(n)
     mean.append(x.mean())
     std.append(x.std())

 fig, ax = plt.subplots()
 ax.scatter(num, mean, label='mean')
 ax2 = ax.twinx()
 ax2.scatter(num, std, color='r', label='std')
 plt.show()

I made this as a trial. Around 50,000 or 100,000 samples, the mean smoothly approaches zero and the standard deviation approaches 1.

Run Results

2022-09-30 11:02

You will find the answer by visiting this site; someone answered the same question on teratail.

2022-09-30 11:02


© 2023 OneMinuteCode. All rights reserved.