> > Juergen seems offline or too busy to respond. Here's what he wrote
> > on the matter. I guess he's reading the entire log into memory and
> > updating it there.

> Jürgen is talking about the file event.log which MoinMoin writes.
> This is not read into memory; new events are simply appended to
> the file.
>
> Now, since the Wiki has recursive links, such as the "LikePages"
> links on all pages and history links like the per-page info screen,
> a recursive wget is likely to run for quite a while (even more so
> because the URL level doesn't change much and thus probably doesn't
> trigger any depth restrictions on wget-like crawlers) and generate
> lots of events...
>
> What was the cause of the breakdown? A full disk, or a process
> claiming all resources?

A process running out of memory, AFAIK.

I just ran a recursive wget on the Wiki, and it completed without
bringing the site down, downloading about 1000 files (several views
of each Wiki page). I didn't see the Wiki appear in the "top"
display. So either Juergen fixed the problem (as he said he did), or
there was a different cause. I do wish Juergen would respond to his
mail.

--Guido van Rossum (home page: http://www.python.org/~guido/)
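
For illustration, the append-only pattern described above amounts to
roughly the following minimal sketch (this is not MoinMoin's actual
code; the file name and record layout are assumptions based on the
description):

    import time

    EVENT_LOG = "event.log"  # assumed name, per the description above

    def log_event(event_type, pagename):
        """Append one event record; the existing log is never read back."""
        record = "%d\t%s\t%s\n" % (int(time.time()), event_type, pagename)
        # Append mode keeps each event O(1) in the log's size, so a
        # long crawl grows the file without growing memory use.
        with open(EVENT_LOG, "a") as f:
            f.write(record)

Since each page view costs one appended line, a runaway crawler
inflates the file but should not, by itself, exhaust memory.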
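
The point about depth restrictions can also be made concrete; a
sketch, assuming a crawler that keys its limit off URL path depth
(the host and action parameters below are illustrative):

    from urllib.parse import urlparse

    def path_depth(url):
        """Count non-empty path segments in a URL."""
        return len([seg for seg in urlparse(url).path.split("/") if seg])

    # Action links like LikePages or the info screen stay at the same
    # path depth as the page itself, so a path-based limit never trips.
    print(path_depth("http://wiki.example.org/FrontPage"))                  # 1
    print(path_depth("http://wiki.example.org/FrontPage?action=LikePages")) # 1
    print(path_depth("http://wiki.example.org/FrontPage?action=info"))      # 1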