Why can supercomputers calculate at high speeds?
It is well known that supercomputers achieve their speed by combining many CPUs.
But I don't understand why that makes them so fast. The reason is apparently Amdahl's law:
When a program is accelerated by parallel computation on multiple processors, the speedup is limited by the program's sequential part. For example, if 95% of a program can be parallelized, it will never run more than about 20 times faster, no matter how many processors you add, as shown in the figure.
I can't quite grasp this rule. What principle is behind it?
Think of a program's execution time as (execution time of the parallelizable part) + (execution time of the sequential part). Only the former can be reduced by parallelization, and in theory it approaches zero as the number of parallel processors goes to infinity.
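The decomposition above can be sketched as a small function (the 95/5 split is the example used below; the function itself is just the two-term model):

```python
# Execution-time model: total time on N processors is the parallelizable
# time divided by N, plus the fixed sequential time. As N grows, the
# first term approaches zero and only the sequential time remains.
def total_time(t_parallel: float, t_sequential: float, n: int) -> float:
    return t_parallel / n + t_sequential

print(total_time(95.0, 5.0, 1))      # 100.0 (no parallelization)
print(total_time(95.0, 5.0, 10**9))  # approaches 5.0, the sequential time
```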
Suppose the processing time without parallelization is 100, and the sequential part accounts for 5% of that.
No parallelization: 95 + 5 = 100
5 parallel: 95/5 + 5 = 19 + 5 = 24 (4.2x faster)
10 parallel: 95/10 + 5 = 9.5 + 5 = 14.5 (6.9x faster)
1000 parallel: 95/1000 + 5 = 0.095 + 5 ≈ 5.1 (19.6x faster)
100000 parallel: 95/100000 + 5 ≈ 5.001 (20x faster)
So with 5% sequential execution, you can speed up by at most a factor of 20.
Similarly, if the sequential part is 1%, the best possible speedup is 100x; if it is 0.1%, the best possible speedup is 1000x.
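The figures above follow directly from Amdahl's law written as a speedup formula; a minimal sketch:

```python
# Amdahl's law: with a parallelizable fraction p and N processors,
#   speedup = 1 / ((1 - p) + p / N)
# As N grows, the speedup approaches 1 / (1 - p), the inverse of the
# sequential fraction.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (5, 10, 1000, 100000):
    print(f"{n:>6} parallel: {speedup(0.95, n):.1f}x")

# Upper bounds set by the sequential fraction alone:
for seq in (0.05, 0.01, 0.001):
    print(f"{seq:.1%} sequential -> at most {1 / seq:.0f}x")
```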
To reduce the percentage of sequential execution, instead of shrinking the sequential part, you can enlarge the parallel part.
Suppose you have a problem of computing some value for many different x and n.
If x = 1, 1.01, 1.02, ..., 1000 and n = 1, 2, 3, ..., 100000, the parallel execution part grows by roughly a factor of 10,000, which makes the sequential part correspondingly about 1/10,000 of the whole.
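A sketch of that parameter-sweep idea (the per-task and setup costs below are assumed, illustrative numbers only): every (x, n) combination is an independent task, so the parallelizable share of the work grows with the grid while the sequential setup cost stays fixed.

```python
# Count the independent tasks in the sweep x = 1, 1.01, ..., 1000
# crossed with n = 1, 2, ..., 100000.
num_x = 99901            # x values: (1000 - 1) / 0.01 + 1
num_n = 100000           # n values: 1 through 100000
tasks = num_x * num_n    # independent evaluations, one per (x, n) pair

# With a hypothetical per-task cost and fixed sequential setup cost,
# the sequential fraction shrinks as the grid grows.
t_task, t_seq = 1.0, 100.0
seq_fraction = t_seq / (t_seq + t_task * tasks)
print(f"{tasks} tasks, sequential fraction ~ {seq_fraction:.1e}")
```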
In real numerical calculations and simulations, "more repetitions" and "more numerical computation" give more accurate results, so there is also demand for ever larger degrees of parallelism.
© 2023 OneMinuteCode. All rights reserved.