At some point, Benyang Tang <btang at pacific.jpl.nasa.gov> wrote:

> Thanks. I added the lapack library to setup.py. It worked. The setup.py
> now looks like this:
>
> # delete all but the first one in this list if using your own LAPACK/BLAS
> sourcelist = ['Src/lapack_litemodule.c',]
> # set these to use your own BLAS
> library_dirs_list = ['/usr/lib','/usr/local/lib','/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66']
> libraries_list = ['blas','lapack','g2c','m']
>
> However, numpy does not get any speed gain by linking to the native
> blas/lapack. Here I tested the multiplication of two real*4 matrices:
> Multiplication of 100X100 matrices takes 0.01 s
> Multiplication of 200X200 matrices takes 0.11 s
> Multiplication of 300X300 matrices takes 0.41 s
> Multiplication of 400X400 matrices takes 1.47 s
> Multiplication of 500X500 matrices takes 3.10 s
> Multiplication of 600X600 matrices takes 5.51 s
>
> The timing is roughly the same as when linking to the compiled blas_lite.

Gee, really? Guess what: NumPy doesn't use BLAS for multiplying matrices
:-(. It uses its own implementation, which (on my machine) is about six
times slower for 600x600 matrices. You'll have to write your own wrapper
around BLAS for matrix multiplication.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca
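
[Editor's note: below is a minimal sketch of what such a wrapper might look like,
done from Python via ctypes rather than as a C extension. The library name
"libcblas.so" and the availability of the CBLAS interface (cblas_sgemm) are
assumptions and vary between BLAS builds; adjust both for your system.]

    import ctypes
    import random
    import time

    # Assumption: a CBLAS shared library is installed under this name
    # (it might instead be libopenblas.so, libcblas.so.3, etc.).
    blas = ctypes.CDLL("libcblas.so")

    CBLAS_ROW_MAJOR = 101   # standard CBLAS enum values
    CBLAS_NO_TRANS = 111

    def sgemm(m, n, k, a, b):
        # Single-precision C = A*B; a and b are flat, row-major c_float
        # arrays of shape m x k and k x n respectively.
        c = (ctypes.c_float * (m * n))()
        blas.cblas_sgemm(CBLAS_ROW_MAJOR, CBLAS_NO_TRANS, CBLAS_NO_TRANS,
                         m, n, k,
                         ctypes.c_float(1.0), a, k,   # alpha, A, lda
                         b, n,                        # B, ldb
                         ctypes.c_float(0.0), c, n)   # beta, C, ldc
        return c

    if __name__ == "__main__":
        for size in (100, 200, 300, 400, 500, 600):
            data = [random.random() for _ in range(size * size)]
            a = (ctypes.c_float * (size * size))(*data)
            b = (ctypes.c_float * (size * size))(*data)
            t0 = time.time()
            sgemm(size, size, size, a, b)
            print("Multiplication of %dX%d matrices takes %.2f s"
                  % (size, size, time.time() - t0))

A production wrapper would more likely be written as a C extension calling
sgemm/dgemm directly, but this sketch is enough to compare timings against
the built-in matrix multiply.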