The difference is that `reversed` is an iterator (it's also lazily evaluated) and `sorted` is a function that works "eagerly". All built-in iterators (at least in python-3.x) like `map`, `zip`, `filter`, `reversed`, ... are implemented as classes, while the eager-operating built-ins are functions, e.g. `min`, `max`, `any`, `all` and `sorted`.
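You can see that distinction directly in the interpreter (shown here on CPython 3.x; the exact reprs may vary between versions):
>>> map, filter, zip, reversed
(<class 'map'>, <class 'filter'>, <class 'zip'>, <class 'reversed'>)
>>> min, sorted
(<built-in function min>, <built-in function sorted>)
So calling `reversed` creates an instance of that class: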
>>> a = [1,2,3,4]
>>> r = reversed(a)
>>> r
<list_reverseiterator at 0x2187afa0240>
You actually need to "consume" the iterator to get the values (e.g. with `list`):
>>> list(r)
[4, 3, 2, 1]
On the other hand this "consuming" part isn't needed for functions like `sorted`:
>>> s = sorted(a)
>>> s
[1, 2, 3, 4]
In the comments it was asked why these are implemented as classes instead of functions. That's not really easy to answer but I'll try my best:
Using lazy-evaluating operations has one huge benefit: they are very memory efficient when chained. They don't need to create intermediate lists unless they are explicitly "requested". That was the reason why `map`, `zip` and `filter` were changed from eager-operating functions (python-2.x) to lazy-operating classes (python-3.x).
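A rough sketch of what that means in practice (the numbers are just for illustration): chaining `filter` and `map` never builds the intermediate sequences, the work only happens when you ask for the next value.
>>> nums = range(1000000)
>>> evens = filter(lambda x: x % 2 == 0, nums)   # nothing computed yet
>>> squares = map(lambda x: x * x, evens)        # still nothing computed
>>> next(squares)                                # work happens only on demand
0
>>> next(squares)
4
In python-2.x the same chain would have created two full intermediate lists before you ever looked at a single value.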
Generally there are two ways in Python to create iterators (a minimal sketch of both follows this list):
- classes that return `self` in their `__iter__` method (and produce the values in `__next__`)
- generator functions - functions that contain a `yield` statement
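Here is that sketch (`CountDown` and `count_down` are made-up names, not anything from the standard library):
class CountDown(object):
    # iterator class: __iter__ returns self, __next__ produces the values
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

def count_down(start):
    # generator function: the yield makes Python build the iterator for you
    while start > 0:
        yield start
        start -= 1
Both behave the same when consumed:
>>> list(CountDown(3))
[3, 2, 1]
>>> list(count_down(3))
[3, 2, 1]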
However, (at least) CPython implements all of its built-ins (and several standard library modules) in C. It's very easy to create iterator classes in C, but I haven't found any sensible way to create generator functions based on the Python C API. So the reason why these iterators are implemented as classes (in CPython) might just be convenience or the lack of (fast or implementable) alternatives.
There is an additional reason to use classes instead of generators: you can implement special methods for classes, but you can't implement them on generator functions. That might not sound impressive, but it has definite advantages. For example most iterators can be pickled (at least on Python-3.x) using the `__reduce__` and `__setstate__` methods. That means you can store them on disk, and it allows copying them. Since Python-3.4 some iterators also implement `__length_hint__`, which makes consuming these iterators with `list` (and similar) much faster.
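For example (sketched on CPython 3.4+, where the list iterators grew these methods):
>>> import pickle, operator
>>> it = reversed([1, 2, 3, 4])
>>> next(it)                                # consume one value ...
4
>>> rest = pickle.loads(pickle.dumps(it))   # ... the pickled copy remembers the position
>>> list(rest)
[3, 2, 1]
>>> operator.length_hint(reversed([1, 2, 3, 4]))
4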
Note that `reversed` could easily be implemented as a factory function (like `iter`), but unlike `iter`, which can return two unique classes, `reversed` can only return one unique class.
To illustrate the possible (and unique) classes you have to consider a class that has no `__iter__` and no `__reversed__` method but is iterable and reverse-iterable (by implementing `__getitem__` and `__len__`):
class A(object):
    def __init__(self, vals):
        self.vals = vals

    def __len__(self):
        return len(self.vals)

    def __getitem__(self, idx):
        return self.vals[idx]
And while it makes sense to add an abstraction layer (a factory function) in the case of `iter` - because the returned class depends on the number of input arguments:
>>> iter(A([1,2,3]))
<iterator at 0x2187afaed68>
>>> iter(min, 0)  # the two-argument form iter(callable, sentinel) - not really useful here, just to see what it returns
<callable_iterator at 0x1333879bdd8>
That reasoning doesn't apply to `reversed`:
>>> reversed(A([1,2,3]))
<reversed at 0x2187afaec50>
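Either way, consuming the returned objects gives the expected values:
>>> list(reversed(A([1,2,3])))
[3, 2, 1]
>>> list(iter(A([1,2,3])))
[1, 2, 3]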