python - Map each list value to its corresponding percentile

I'd like to create a function that takes a (sorted) list as its argument and outputs a list containing each element's corresponding percentile.

For example, fn([1,2,3,4,17]) returns [0.0, 0.25, 0.50, 0.75, 1.00].

Can anyone please either:

  1. Help me correct my code below? OR
  2. Offer a better alternative than my code for mapping values in a list to their corresponding percentiles?

My current code:

def median(mylist):
    length = len(mylist)
    if not length % 2:
        return (mylist[length / 2] + mylist[length / 2 - 1]) / 2.0
    return mylist[length / 2]

###############################################################################
# PERCENTILE FUNCTION
###############################################################################

def percentile(x):
    """
    Find the corresponding percentile of each value relative to a list of values,
    where x is the list of values.
    The input list should already be sorted!
    """

    # sort the input list
    # list_sorted = x.sort()

    # count the number of elements in the list
    list_elementCount = len(x)

    # obtain the unique values from the list
    listFromSetFromList = list(set(x))

    # count the number of unique elements in the list
    list_uniqueElementCount = len(set(x))

    # define extreme quantiles
    percentileZero    = min(x)
    percentileHundred = max(x)

    # define median quantile
    mdn = median(x) 

    # create empty list to hold percentiles
    x_percentile = [0.00] * list_elementCount 

    # initialize unique count
    uCount = 0

    for i in range(list_elementCount):
        if x[i] == percentileZero:
            x_percentile[i] = 0.00
        elif x[i] == percentileHundred:
            x_percentile[i] = 1.00
        elif x[i] == mdn:
            x_percentile[i] = 0.50 
        else:
            subList_elementCount = 0
            for j in range(i):
                if x[j] < x[i]:
                    subList_elementCount = subList_elementCount + 1 
            x_percentile[i] = float(subList_elementCount / list_elementCount)
            #x_percentile[i] = float(len(x[x > listFromSetFromList[uCount]]) / list_elementCount)
            if i == 0:
                continue
            else:
                if x[i] == x[i-1]:
                    continue
                else:
                    uCount = uCount + 1
    return x_percentile

Currently, if I submit percentile([1,2,3,4,17]), the list [0.0, 0.0, 0.5, 0.0, 1.0] is returned.


1 Reply


I think your example input/output does not correspond to typical ways of calculating percentile. If you calculate the percentile as "proportion of data points strictly less than this value", then the top value should be 0.8 (since 4 of 5 values are less than the largest one). If you calculate it as "proportion of data points less than or equal to this value", then the bottom value should be 0.2 (since 1 of 5 values equals the smallest one). Thus the percentiles would be [0, 0.2, 0.4, 0.6, 0.8] or [0.2, 0.4, 0.6, 0.8, 1]. Your definition seems to be "the number of data points strictly less than this value, considered as a proportion of the number of data points not equal to this value", but in my experience this is not a common definition (see, for instance, Wikipedia).
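
To make those two definitions concrete, here is a minimal plain-Python check (Python 3 division assumed) that reproduces both candidate outputs for your example:

>>> x = [1, 2, 3, 4, 17]
>>> n = len(x)
>>> [sum(v < a for v in x) / n for a in x]     # proportion strictly less than each value
[0.0, 0.2, 0.4, 0.6, 0.8]
>>> [sum(v <= a for v in x) / n for a in x]    # proportion less than or equal to each value
[0.2, 0.4, 0.6, 0.8, 1.0]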

With the typical percentile definitions, the percentile of a data point is equal to its rank divided by the number of data points. (See for instance this question on Stats SE asking how to do the same thing in R.) Differences in how to compute the percentile amount to differences in how to compute the rank (for instance, how to rank tied values). The scipy.stats.percentileofscore function provides four ways of computing percentiles:

>>> from scipy import stats
>>> x = [1, 1, 2, 2, 17]
>>> [stats.percentileofscore(x, a, 'rank') for a in x]
[30.0, 30.0, 70.0, 70.0, 100.0]
>>> [stats.percentileofscore(x, a, 'weak') for a in x]
[40.0, 40.0, 80.0, 80.0, 100.0]
>>> [stats.percentileofscore(x, a, 'strict') for a in x]
[0.0, 0.0, 40.0, 40.0, 80.0]
>>> [stats.percentileofscore(x, a, 'mean') for a in x]
[20.0, 20.0, 60.0, 60.0, 90.0]

(I used a dataset containing ties to illustrate what happens in such cases.)

The "rank" method assigns tied groups a rank equal to the average of the ranks they would cover (i.e., a three-way tie for 2nd place gets a rank of 3 because it "takes up" ranks 2, 3 and 4). The "weak" method assigns a percentile based on the proportion of data points less than or equal to a given point; "strict" is the same but counts proportion of points strictly less than the given point. The "mean" method is the average of the latter two.

As Kevin H. Lin noted, calling percentileofscore in a loop is inefficient since it has to recompute the ranks on every pass. However, these percentile calculations can be easily replicated using different ranking methods provided by scipy.stats.rankdata, letting you calculate all the percentiles at once:

>>> from scipy import stats
>>> stats.rankdata(x, "average")/len(x)
array([ 0.3,  0.3,  0.7,  0.7,  1. ])
>>> stats.rankdata(x, 'max')/len(x)
array([ 0.4,  0.4,  0.8,  0.8,  1. ])
>>> (stats.rankdata(x, 'min')-1)/len(x)
array([ 0. ,  0. ,  0.4,  0.4,  0.8])

In the last case the ranks are adjusted down by one to make them start from 0 instead of 1. (I've omitted "mean", but it could easily be obtained by averaging the results of the latter two methods.)
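
For instance, the omitted "mean" comes out to the average of the 'max'-based result and the 0-based 'min'-based result:

>>> (stats.rankdata(x, 'max') + stats.rankdata(x, 'min') - 1) / (2 * len(x))
array([ 0.2,  0.2,  0.6,  0.6,  0.9])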

I did some timings. With small data such as that in your example, using rankdata is somewhat slower than Kevin H. Lin's solution (presumably due to the overhead scipy incurs in converting things to numpy arrays under the hood) but faster than calling percentileofscore in a loop as in reptilicus's answer:

In [11]: %timeit [stats.percentileofscore(x, i) for i in x]
1000 loops, best of 3: 414 μs per loop

In [12]: %timeit list_to_percentiles(x)
100000 loops, best of 3: 11.1 μs per loop

In [13]: %timeit stats.rankdata(x, "average")/len(x)
10000 loops, best of 3: 39.3 μs per loop
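
(list_to_percentiles from Kevin H. Lin's answer is not reproduced in this excerpt; the timings here use his original version. A rough stand-in, assuming a single sort followed by rank assignment, could look like the following sketch.)

# Hypothetical stand-in for list_to_percentiles (the real implementation lives in
# Kevin H. Lin's answer): sort once, then hand out percentiles by rank.
# Unlike rankdata's "average" method, ties are not averaged here.
def list_to_percentiles(numbers):
    pairs = sorted(zip(numbers, range(len(numbers))))   # (value, original position), sorted by value
    result = [0.0] * len(numbers)
    for rank, (value, original_index) in enumerate(pairs):
        result[original_index] = float(rank) / (len(numbers) - 1)
    return result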

With a large dataset, however, the performance advantage of numpy takes effect and using rankdata is 10 times faster than Kevin's list_to_percentiles:

In [18]: x = np.random.randint(0, 10000, 1000)

In [19]: %timeit [stats.percentileofscore(x, i) for i in x]
1 loops, best of 3: 437 ms per loop

In [20]: %timeit list_to_percentiles(x)
100 loops, best of 3: 1.08 ms per loop

In [21]: %timeit stats.rankdata(x, "average")/len(x)
10000 loops, best of 3: 102 μs per loop

This advantage will only become more pronounced on larger and larger datasets.

