On 6 Jun 2014 02:16, "Nikolaus Rath" <Nikolaus at rath.org> wrote:
>
> Nathaniel Smith <njs at pobox.com> writes:
> > Such optimizations are important enough that numpy operations always
> > give the option of explicitly specifying the output array (like
> > in-place operators but more general and with clumsier syntax). Here's
> > an example small-array benchmark that IIUC uses Jacobi iteration to
> > solve Laplace's equation. It's been written in both natural and
> > hand-optimized formats (compare "num_update" to "num_inplace"):
> >
> > https://yarikoptic.github.io/numpy-vbench/vb_vb_app.html#laplace-inplace
> >
> > num_inplace is totally unreadable, but because we've manually elided
> > temporaries, it's 10-15% faster than num_update.
>
> Does it really have to be that ugly? Shouldn't using
>
>     tmp += u[2:,1:-1]
>     tmp *= dy2
>
> instead of
>
>     np.add(tmp, u[2:,1:-1], out=tmp)
>     np.multiply(tmp, dy2, out=tmp)
>
> give the same performance? (yes, not as nice as what you're proposing,
> but I'm still curious).

Yes, only the last line actually requires the out= syntax; everything
else could use in-place operators instead (and automatic temporary
elision wouldn't work for the last line anyway). I guess whoever wrote
it did it that way for consistency (and perhaps in hopes of eking out a
tiny bit more speed - in numpy currently the in-place operators are
implemented by dispatching to function calls like those). Not sure how
much difference it really makes in practice, though. It'd still be 8
statements and two named temporaries to do the work of one infix
expression, with order of operations implicit.

-n
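
For concreteness, here is a minimal sketch of the two styles being
compared. This is an illustration, not the exact code behind the
numpy-vbench "laplace-inplace" benchmark; the grid u and the scalars
dx2 = dx**2, dy2 = dy**2 are assumed, and the function names just
mirror the ones mentioned above.

    import numpy as np

    def num_update(u, dx2, dy2):
        # Natural, readable form: one infix expression, but every
        # intermediate result allocates a fresh temporary array.
        u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) * dy2 +
                         (u[1:-1, 2:] + u[1:-1, :-2]) * dx2) / (2 * (dx2 + dy2))

    def num_inplace(u, dx2, dy2):
        # Hand-optimized form: 8 statements and two named temporaries,
        # with every intermediate written in place via out=.
        tmp = u[:-2, 1:-1].copy()
        np.add(tmp, u[2:, 1:-1], out=tmp)
        np.multiply(tmp, dy2, out=tmp)
        tmp2 = u[1:-1, :-2].copy()
        np.add(tmp2, u[1:-1, 2:], out=tmp2)
        np.multiply(tmp2, dx2, out=tmp2)
        np.add(tmp, tmp2, out=tmp)
        # Only this final write genuinely needs out=: it stores the
        # result directly into the interior view of u.
        np.multiply(tmp, 1.0 / (2 * (dx2 + dy2)), out=u[1:-1, 1:-1])

    # Example use: a 100x100 unit-square grid with one hot boundary row.
    u = np.zeros((100, 100))
    u[0] = 1.0
    dx2 = dy2 = (1.0 / 99) ** 2
    num_update(u, dx2, dy2)

As the exchange above notes, all but the last np.add/np.multiply call
in num_inplace could equally be written with in-place operators
(tmp += u[2:, 1:-1]; tmp *= dy2) without creating extra temporaries;
only the final store into the u[1:-1, 1:-1] view has no in-place
operator equivalent.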