Showing content from http://mail.python.org/pipermail/python-dev/2000-September/009294.html below:

[Python-Dev] find_recursionlimit.py vs. libpthread vs. linux

Charles G Waldman cgw@fnal.gov
Mon, 11 Sep 2000 13:55:09 -0500 (CDT)
It has been noted by people doing testing on Linux systems that

ulimit -s unlimited
python Misc/find_recursionlimit.py

will run for a *long* time if you have built Python without threads, but
will die after roughly 2400-2500 iterations if you have built with
threads, regardless of the "ulimit" setting.
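The probing idea can be sketched in modern Python terms as follows (a
minimal sketch of the technique, not the actual Misc/find_recursionlimit.py,
which raises the limit in steps and exercises heavier operations per frame):

```python
import sys

def probe(limit):
    """Raise the interpreter's recursion limit to `limit` and recurse
    until Python's own RecursionError fires.  Set the limit high enough
    and the C stack is exhausted first, killing the process -- which is
    exactly the crash point the real script hunts for."""
    sys.setrecursionlimit(limit)
    depth = 0
    def recurse():
        nonlocal depth
        depth += 1
        recurse()
    try:
        recurse()
    except RecursionError:
        return depth

# With a modest limit we stop safely at the Python-level ceiling:
print(probe(500))
```

With threads built in and the old LinuxThreads clamp in effect, the C
stack runs out around the 2400-2500 mark instead of wherever "ulimit -s"
would suggest.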

I had thought this was evidence of a bug in Pthreads.  In fact
(although we still have other reasons to suspect Pthread bugs),
the behavior is easily explained.  The function "pthread_initialize"
in pthread.c contains this very lovely code:

  /* Play with the stack size limit to make sure that no stack ever grows
     beyond STACK_SIZE minus two pages (one page for the thread descriptor
     immediately beyond, and one page to act as a guard page). */
  getrlimit(RLIMIT_STACK, &limit);
  max_stack = STACK_SIZE - 2 * __getpagesize();
  if (limit.rlim_cur > max_stack) {
    limit.rlim_cur = max_stack;
    setrlimit(RLIMIT_STACK, &limit);
  }

In "internals.h", STACK_SIZE is #defined to (2 * 1024 * 1024), so with
typical 4 KB pages max_stack works out to 2*1024*1024 - 2*4096 =
2,088,960 bytes.

So whenever you're using threads, you have an effective rlimit of just
under 2MB for stack, regardless of what you may *think* you have set via
"ulimit -s".
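You can check the limit the kernel is actually enforcing from inside the
process with the `resource` module (Unix-only); on an old LinuxThreads
system you would see the soft limit clamped to just under 2MB once the
thread library initializes, while modern NPTL-based glibc no longer does
this:

```python
import resource

# Query the stack-size rlimit currently enforced for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def fmt(v):
    return "unlimited" if v == resource.RLIM_INFINITY else "%d bytes" % v

print("soft stack limit:", fmt(soft))
print("hard stack limit:", fmt(hard))
```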

One more mystery explained!
