Daniel (ajax) Diniz wrote:
> "Martin v. Löwis" wrote:
>>> Now, getting into pie-in-the-sky territory, if someone (not logged in)
>>> were to download all issues for scraping and feeding to a local
>>> database, what time of day would be least disastrous for the server? :)
>>
>> I think HTML scraping is a really bad idea. What is it that you
>> specifically want to do with these data?
>
> For starters, free-form searches, aggregation and filtering of
> results. The web interface is pretty good for handling individual
> issues, but not so good for adding someone as nosy to lots of issues.
>
> With some more time and effort, I'd be able to:
>  * Organize a local workflow with a tweaked UI
>  * Send emails before they were done :D
>  * Use a VCS for in-progress activities
>  * Figure out how to serialize and submit the work done locally
>  * Share results with interested parties off-tracker (e.g., the
>    auto-nosy mentioned up-thread)
>
> Right now, more efficient searching and aggregation, along with some
> (local, flexible) UI tweaks, sum it up. Maybe I can offer a patch for
> something like PyPI's 'simple' interface?
>
> Cheers,
> Daniel
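For anyone wanting to try the local search/aggregation idea without resorting to HTML scraping, below is a minimal sketch that pulls issues through Roundup's CSV export action and filters them locally. It assumes the tracker exposes the standard @action=export_csv endpoint; the exact query parameters, column names, and status id used here are illustrative assumptions, not verified against bugs.python.org.

import csv
import io
import urllib.request

# Assumed Roundup export URL: @action=export_csv returns the selected
# columns as CSV. Column names and the "status=1" (open) filter are
# guesses and may differ on a given tracker instance.
TRACKER = "https://bugs.python.org/issue"
QUERY = "?@action=export_csv&@columns=id,title,status&@filter=status&status=1"

def fetch_open_issues():
    """Download open issues as one CSV request; return a list of dicts."""
    with urllib.request.urlopen(TRACKER + QUERY) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def filter_issues(issues, keyword):
    """Free-form local filtering: keep issues whose title mentions keyword."""
    return [i for i in issues if keyword.lower() in i["title"].lower()]

if __name__ == "__main__":
    issues = fetch_open_issues()
    print("%d open issues fetched" % len(issues))
    for issue in filter_issues(issues, "unicode")[:10]:
        print(issue["id"], issue["title"])

One request for a CSV dump is also far cheaper for the server than rendering every issue page, which is presumably part of why scraping was discouraged up-thread.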