Removing duplicates from a list

Will McGugan news at NOwillmcguganSPAM.com
Wed Sep 14 08:28:58 EDT 2005
Rubinho wrote:
> I've a list with duplicate members and I need to make each entry
> unique.
> 
> I've come up with two ways of doing it and I'd like some input on what
> would be considered more pythonic (or at least best practice).
> 
> Method 1 (the traditional approach)
> 
> for x in mylist:
>     if mylist.count(x) > 1:
>         mylist.remove(x)
> 
> Method 2 (not so traditional)
> 
> mylist = set(mylist)
> mylist = list(mylist)
> 
> Converting to a set drops all the duplicates and converting back to a
> list, well, gets it back to a list which is what I want.
> 
> I can't imagine one being much faster than the other except in the case
> of a huge list and mine's going to typically have less than 1000
> elements.  

I would imagine that 2 would be significantly faster. Method 1 calls 
'count' once per element, and each call scans the whole list, so the 
loop is O(n**2) overall; building a set hashes each element once, which 
is O(n) on average. I'm also not sure about removing an element whilst 
iterating; I think that's a no-no.
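
To make both points concrete, here is a minimal sketch (the sample data 
and the unique() helper are illustrative, not from the thread). The 
first loop shows how remove() can skip elements; the second is an 
order-preserving dedup that keeps the set's fast membership test:

    # Removing whilst iterating: remove() shifts the tail of the list
    # left, so the loop index steps past the element that slides into
    # the freed slot, and some duplicates survive.
    mylist = [3, 3, 3, 3]
    for x in mylist:
        if mylist.count(x) > 1:
            mylist.remove(x)
    print(mylist)        # -> [3, 3], duplicates remain

    # Order-preserving alternative: one pass, with an auxiliary set
    # for O(1) average membership tests (O(n) overall, versus
    # O(n**2) for the count()-based loop).
    def unique(items):
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

    print(unique([3, 1, 3, 2, 1]))   # -> [3, 1, 2]

Note that plain set(mylist) gives no guarantee about the order of the 
survivors, whereas the seen-set version keeps the first occurrence of 
each element in place.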

Will McGugan
-- 
http://www.willmcgugan.com
"".join({'*':'@','^':'.'}.get(c,0) or chr(97+(ord(c)-84)%26) for c in 
"jvyy*jvyyzpthtna^pbz")

