Hey! I've noticed a significant slowdown between v4.7 and v5.0 when running the same code to obtain steady-state solutions with iterative SciPy solvers, especially when using `use_rcm=True`. After spending some time on it, I believe the key difference between the two versions comes from the permutation done in `_permute_rcm`, where the matrix is permuted with the indices obtained from the `reverse_cuthill_mckee` function; in v5.0 this prevents the preconditioner from being obtained efficiently.
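For background, here is a minimal sketch (my own example with small made-up data, not taken from QuTiP) of why the RCM ordering matters: applying the permutation symmetrically clusters the nonzeros near the diagonal, which is what keeps an incomplete-LU preconditioner cheap and effective.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Small symmetric sparse matrix with scattered off-diagonal entries.
A = sp.csr_matrix(np.array([
    [4., 0., 1., 0., 0.],
    [0., 4., 0., 0., 1.],
    [1., 0., 4., 1., 0.],
    [0., 0., 1., 4., 0.],
    [0., 1., 0., 0., 4.],
]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)

# Symmetric permutation P A P^T: rows and columns reordered together.
A_rcm = A[perm, :][:, perm]

def bandwidth(m):
    """Largest distance of any stored entry from the diagonal."""
    c = m.tocoo()
    return int(np.abs(c.row - c.col).max())

print("bandwidth before/after RCM:", bandwidth(A), bandwidth(A_rcm))
```

On this toy matrix the bandwidth shrinks after the reordering; an ILU factorization of the reordered matrix then suffers much less fill-in.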
In v5.0, the Liouvillian matrix `L` is permuted with `_data.permute.indices`. This gives different results from v4.7, where `sp_permute` was applied with the same permutation indices.
As per my understanding, `_data.permute.indices` performs roughly the following:

```python
A = L.copy()
A.indices = perm.take(A.indices)
A = A.tocsc()
A.indices = perm.take(A.indices)
```

whereas in v4.7 the permutation was done with `sp_permute`, with `perm` obtained from `reverse_cuthill_mckee`.
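To illustrate why the two approaches can disagree even with identical indices, here is a minimal sketch (my own example, not QuTiP internals): relabelling the stored `indices` array with `perm` is equivalent to permuting by the *inverse* of `perm`, not by `perm` itself as scipy-style fancy indexing does.

```python
import numpy as np
import scipy.sparse as sp

# Deterministic matrix whose columns are all distinct.
A = sp.csr_matrix(np.arange(1, 37, dtype=float).reshape(6, 6))
perm = np.array([1, 2, 3, 4, 5, 0])  # a cyclic shift (not an involution)

# Variant 1: relabel the stored column indices, j -> perm[j],
# so the old column j ends up in new column perm[j].
B = sp.csr_matrix((A.data, perm[A.indices], A.indptr), shape=A.shape)

# Variant 2: scipy fancy indexing, new column k = old column perm[k].
C = A[:, perm]

# Variant 1 actually matches fancy indexing with the INVERSE permutation.
inv = np.argsort(perm)
D = A[:, inv]

print(np.allclose(B.toarray(), C.toarray()))  # False
print(np.allclose(B.toarray(), D.toarray()))  # True
```

The two conventions coincide only when `perm` is its own inverse, so a routine that switches from one to the other silently applies a different reordering, which could plausibly explain why the preconditioner quality changes.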
I am not sure which behavior is the correct one, but the performance degradation is substantial (more than 10x) and prevents the solver from reaching the requested tolerance.
### Code to Reproduce the Bug

```python
import numpy as np
from qutip import about, destroy, qeye, steadystate, tensor

# Parameters
# ----------
Nc, Nm = 4, 30
E, kappa = 0.1, 0.3
gamma, delta = 3e-4, -0.43

# Operators
# ---------
a = tensor(destroy(Nc), qeye(Nm))
b = tensor(qeye(Nc), destroy(Nm))

# Hamiltonian
# -----------
H = (
    -delta * (a.dag() * a)
    + (b.dag() * b)
    + 2.4 * kappa * (b.dag() + b) * (a.dag() * a)
    + E * (a.dag() + a)
)

# Collapse operators
# ------------------
cc = np.sqrt(kappa) * a
cm = np.sqrt(2 * gamma) * b
cp = np.sqrt(gamma) * b.dag()
c_ops = [cc, cm, cp]

precond_options = {
    'permc_spec': 'NATURAL',
    'diag_pivot_thresh': 0.1,
    'fill_factor': 100,
    'options': {'ILU_MILU': 'smilu_2'},
}
solver_options = {"use_precond": False, "atol": 1e-10, **precond_options}

rho_ss = steadystate(H, c_ops, method="direct", solver="gmres",
                     use_rcm=True, **solver_options)
```

### Code Output
```
RuntimeError: scipy.sparse.linalg.gmres error: Tolerance was not reached
```

### Expected Behaviour
The expected tolerance should be reached, as it is in v4.7, which returns:

```
Quantum object: dims = [[4, 30], [4, 30]], shape = (120, 120), type = oper, isherm = True
```

### Your Environment
```
QuTiP Version:      5.0.2
Numpy Version:      1.26.4
Scipy Version:      1.12.0
Cython Version:     None
Matplotlib Version: 3.9.0
Python Version:     3.10.4
Number of CPUs:     8
BLAS Info:          Generic
INTEL MKL Ext:      False
Platform Info:      Darwin (arm64)
```

### Additional Context
For more context, I noticed this behaviour while working on this PR.