> The problem arises because we're systematically unbalancing the hash
> table. The iterator returns the first valid entry in the hash table,
> which we remove. Repeat several times and pretty soon the iterator has
> to spend O(n) time scanning through empty entries to find the first
> remaining valid entry.

Interesting. Something does seem to go wrong here: if items keep getting removed, I think the set should shrink (though I'm not sure whether it actually does). If it did, you should end up with amortized O(1) for selecting an element (assuming the underlying hashes don't collide).

Regards,
Martin
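To make the quoted behavior concrete, here is a minimal sketch (names and structure are illustrative, not CPython's actual implementation) of an open-addressing table that never shrinks. Each "pop" scans from slot 0 for the first occupied slot, exactly the iterate-then-remove pattern described above, and we count how many slots get examined in total:

```python
def pops_with_probe_count(n):
    """Simulate repeatedly removing the first valid entry from a
    hash table that never shrinks, counting slots examined."""
    slots = list(range(n))      # n occupied slots
    total_probes = 0
    for _ in range(n):
        i = 0
        while slots[i] is None:  # scan past slots emptied by earlier pops
            i += 1
        total_probes += i + 1    # slots examined for this pop
        slots[i] = None          # remove the first valid entry
    return total_probes

# The k-th pop must skip the k-1 slots already emptied, so the total is
# 1 + 2 + ... + n = n*(n+1)/2 probes: O(n) amortized per pop, quadratic
# overall. A table that shrank as it emptied would avoid this.
```

If the table were resized down as entries are removed (as suggested above), the scan distance would stay bounded and each pop would be amortized O(1).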