I am computing an inverse matrix in Python as part of a simulation.

The data is stored in a variable `A` of type `ndarray` with `dtype=np.float64`, where `A` is approximately a 400x400 matrix. I computed the inverse as follows:

```
import numpy as np
Ainv = np.linalg.inv(A)
```

This runs without any errors, but when I then compute

```
AAi = np.dot(A, Ainv)
AiA = np.dot(Ainv, A)
```

the products of the inverse matrix with the original matrix come out completely wrong (for example, `AAi[0][0]=6.68`, `AiA[0][0]=5.8e+15`).

So I tried restricting the calculation to a smaller range in advance:

```
A = A[:100, :100]
```

When the same calculation is repeated on submatrices of different sizes, the entries that should be 1 come out as 9.9997e-01 and the entries that should be 0 come out in the range of ***e-05 to ***e-11, so the attainable precision varies with the size. Specifically, once the submatrix exceeds about 106x106 even the first digit no longer matches, whereas at 80x80 the product is the identity matrix to about 6 digits of precision.

It appears that as the number of matrix elements grows, the product of the original matrix and its inverse is no longer the identity matrix. Is there a way to compute this accurately? It does not matter if the calculation takes some time.

Environment:

Python 2.7.9

NumPy 1.14.0 (lapack and blas already installed)

2022-09-30 14:21

In numerical computation, and not just in Python, computing an inverse matrix directly can amplify error. Theoretically this can be analyzed via the matrix's condition number: in general, the larger the condition number, the larger the error that the computed inverse may contain. The condition number of the matrix the questioner is working with is about 10⁸, which is very large, so it is no surprise that the inverse computed by `np.linalg.inv(A)` contains substantial error.
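The condition number itself is easy to check with `np.linalg.cond`. A minimal sketch, using a random matrix as a stand-in for the questioner's actual data:

```python
import numpy as np

# Stand-in for the real data; substitute your own 400x400 matrix here.
rng = np.random.RandomState(0)
A = rng.rand(400, 400)

# np.linalg.cond returns the 2-norm condition number by default.
kappa = np.linalg.cond(A)
print(kappa)

# Rule of thumb: with float64 (~16 significant digits) you can expect to
# lose roughly log10(kappa) digits of accuracy when inverting A.
print(np.log10(kappa))
```

If `log10(kappa)` is around 8, as in the question, only about half of the 16 significant digits of float64 survive the inversion.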

Therefore, when an inverse matrix is needed in the middle of a numerical computation, it is common not to call `inv()` directly but to work indirectly via LU decomposition.
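A sketch of the LU approach, assuming SciPy is available (it is not mentioned in the question): factor `A` once with `scipy.linalg.lu_factor`, then solve each right-hand side with `lu_solve` instead of ever forming `A⁻¹` explicitly.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.RandomState(0)
A = rng.rand(400, 400)   # stand-in for the real matrix
b = rng.rand(400)

# Factor A once; the factorization can be reused for many right-hand sides.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

# Judge the solve by its residual rather than by inspecting A^-1.
residual = np.linalg.norm(np.dot(A, x) - b)
print(residual)
```

Solving `Ax = b` this way is generally both faster and more accurate than computing `np.dot(np.linalg.inv(A), b)`.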

As a simple example, scaling the components of a matrix can reduce its condition number. For instance, "Numerical Solution Quality: Conditioning, Stability, and Error Analysis" from the Numerical Algorithms Group works through a matrix with a large condition number (that is, an ill-conditioned one). The English Wikipedia article on the condition number also has a comprehensive explanation.
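A minimal sketch of the scaling idea, using a hypothetical badly scaled 2x2 matrix (not the example from the NAG article):

```python
import numpy as np

# Hypothetical matrix whose rows differ by about 10 orders of magnitude.
A = np.array([[1e10, 2e10],
              [3.0,  4.0]])
print(np.linalg.cond(A))          # enormous

# Row equilibration: divide each row by its largest absolute entry.
d = 1.0 / np.abs(A).max(axis=1)
A_scaled = A * d[:, None]
print(np.linalg.cond(A_scaled))   # drops to a modest value
```

Since `D A x = D b` has the same solution `x` as `A x = b` for a diagonal scaling `D`, you can solve the better-conditioned scaled system instead.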

Furthermore, it sometimes works to compute the inverse after taking the singular value decomposition of the original matrix. I asked about this on the sister site Computational Science Stack Exchange and was taught this technique there (see the link below).
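A sketch of the SVD route: decompose, invert the singular values, and discard any that fall below a cutoff. This truncation is essentially what `np.linalg.pinv` does internally via its `rcond` parameter; the cutoff value below is an illustrative choice, not a recommendation.

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(400, 400)   # stand-in for the real matrix

# A = U * diag(s) * Vt, so A^-1 = V * diag(1/s) * U^T.
U, s, Vt = np.linalg.svd(A)

# Zero out the reciprocals of singular values below a relative cutoff.
cutoff = s.max() * 1e-12
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
Ainv = np.dot(Vt.T * s_inv, U.T)

# Measure how far A * Ainv is from the identity.
err = np.linalg.norm(np.dot(A, Ainv) - np.eye(400))
print(err)
```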

These techniques do not always work, but depending on the nature of your problem they are worth trying.

(These are the pages linked in the sentence)

- Condition number -- Wikipedia
- Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()? -- Stack Overflow
- Numerical Solution Quality: Conditioning, Stability, and Error Analysis -- Numerical Algorithms Group
- How to directly compute the inverse of an ill-conditioned dense matrix -- Computational Science Stack Exchange

2022-09-30 14:21

It may depend on the nature of the data, but for a matrix of uniformly distributed random numbers, the diagonal entries come out to almost exactly 1:

```
import numpy as np

A = np.random.rand(400, 400)
Ainv = np.linalg.inv(A)
AAi = np.dot(A, Ainv)
AiA = np.dot(Ainv, A)
for i in range(400):
    print(AAi[i][i], AiA[i][i])
```

Results

```
(0.9999999999984, 0.999999999999889)
(1.0000000000000198, 0.9999999999999999)
(0.9999999999999643, 1.0000000000002853)
(1.0000000000000118, 1.0000000000000686)
(1.0000000000001563, 0.9999999999999708)
(1.000000000000004, 1.0000000000000784)
(remaining rows omitted)
```

However, I have heard that `inv()` can be inaccurate, and that in such cases `solve()` should be used instead:

```
Ainv = np.linalg.solve(A, np.eye(400))
```

This also gave values of almost exactly 1. For reference, the output of `np.show_config()` is shown below; since it reflects compile-time rather than run-time settings, I am not sure how helpful it is. I have lapack, blas, openblas, atlas, and so on installed.

```
lapack_info:
    libraries = ['lapack', 'lapack']
    library_dirs = ['/usr/lib']
    language = f77
lapack_opt_info:
    libraries = ['lapack', 'lapack', 'blas', 'blas']
    library_dirs = ['/usr/lib']
    language = c
    define_macros = [('NO_ATLAS_INFO', 1), ('HAVE_CBLAS', None)]
openblas_lapack_info:
  NOT AVAILABLE
blas_info:
    libraries = ['blas', 'blas']
    library_dirs = ['/usr/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
atlas_3_10_blas_threads_info:
  NOT AVAILABLE
atlas_threads_info:
  NOT AVAILABLE
atlas_3_10_threads_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
atlas_3_10_blas_info:
  NOT AVAILABLE
atlas_blas_threads_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
blas_opt_info:
    libraries = ['blas', 'blas']
    library_dirs = ['/usr/lib']
    language = c
    define_macros = [('NO_ATLAS_INFO', 1), ('HAVE_CBLAS', None)]
atlas_info:
  NOT AVAILABLE
atlas_3_10_info:
  NOT AVAILABLE
lapack_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE
```
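Rather than eyeballing diagonal entries, one way to compare `inv()` and `solve()` quantitatively is to look at the norm of `A·A⁻¹ − I` for each. A sketch, again with a random matrix standing in for real data:

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(400, 400)
I = np.eye(400)

# Two ways to obtain the inverse.
Ainv_inv = np.linalg.inv(A)
Ainv_solve = np.linalg.solve(A, I)

# Distance of A * Ainv from the identity, as a single number per method.
err_inv = np.linalg.norm(np.dot(A, Ainv_inv) - I)
err_solve = np.linalg.norm(np.dot(A, Ainv_solve) - I)
print(err_inv, err_solve)
```

For a well-conditioned matrix both errors are tiny; for an ill-conditioned one, both grow roughly in proportion to the condition number, which is why reducing the condition number matters more than the choice of routine.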

2022-09-30 14:21
