On 6/27/06, Jim Jewett <jimjjewett at gmail.com> wrote:
>
> On 6/27/06, Brett Cannon <brett at python.org> wrote:
> > My worry with this is that by providing checking functions that just
> > return true or false that people will rely on those too much and have
> > logic errors in their check and let security holes develop. That is why
> > the checking functions as they stand now are macros that do the error
> > return for you.
>
> Using a macro that returns an Error is OK. (Well, from this
> perspective; it might be a problem for reference leaks.)

It shouldn't be, as long as you put the call right after the variable
declarations and you don't do any PyObject creation at variable declaration
time.

> I just want a single call that does my erroring out, instead of two
> separate calls depending on whether the interpreter is trusted.

Oh, you won't! You have the set call before you even start using the
interpreter to define your restrictions; that has a return value to flag
that you are trying to set restrictions on a trusted interpreter, and thus
are trying to do something that just won't work.

Then you have the check functions that run in *any* interpreter. If you
happen to be running in a trusted interpreter, they do nothing; basically a
NOOP that lets execution continue. But if you are running in an untrusted
interpreter, the check is performed.

Does that make sense? For code running within an interpreter there is no
trusted/untrusted distinction when it comes to using the checking
functions. The distinction only exists outside the interpreter, before you
begin using it.

-Brett
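
A rough sketch in plain C of the kind of flow described above. Every name
here (the sandbox_* functions, SANDBOX_CHECK, the trusted flag, the
resource cap) is made up for illustration; it is not the actual API being
discussed, just one way the "set call plus error-returning check macro"
idea could look:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for per-interpreter state: trusted by default,
   with a single resource cap as the example restriction. */
static int  interp_trusted = 1;
static long resource_cap   = 0;

/* The "set" call: used before the interpreter runs any code.  Its return
   value flags the mistake of trying to restrict a trusted interpreter. */
static int
sandbox_set_cap(long cap)
{
    if (interp_trusted)
        return -1;              /* restrictions just won't work here */
    resource_cap = cap;
    return 0;
}

/* The checking macro: runs in *any* interpreter.  In a trusted one it is
   a NOOP; in an untrusted one it performs the check and does the error
   return for the caller, so the caller cannot mishandle a failed check. */
#define SANDBOX_CHECK(amount, errval)                         \
    do {                                                      \
        if (!interp_trusted && (amount) > resource_cap) {     \
            fprintf(stderr, "sandbox: cap exceeded\n");       \
            return (errval);                                  \
        }                                                     \
    } while (0)

/* Example caller: one call, no trusted/untrusted branch.  The check sits
   right after the declarations, before anything (e.g. a PyObject) is
   created, so the early return cannot leak a reference. */
static void *
guarded_alloc(long nbytes)
{
    void *buf;                      /* declarations only; nothing owned yet */

    SANDBOX_CHECK(nbytes, NULL);    /* NOOP if trusted, error return if not */
    buf = malloc((size_t)nbytes);
    return buf;
}

int
main(void)
{
    /* Setting a cap on a trusted interpreter is flagged by the return value. */
    if (sandbox_set_cap(1024) < 0)
        fprintf(stderr, "cannot restrict a trusted interpreter\n");

    /* Mark the interpreter untrusted (a stand-in for creating one as such);
       the very same checking macro now actually enforces the cap. */
    interp_trusted = 0;
    sandbox_set_cap(1024);
    void *p = guarded_alloc(4096);  /* cap exceeded, so this returns NULL */
    free(p);
    return 0;
}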