Probably the simplest possible trending "algorithm" I can think of is the n-day moving average. I'm not sure how your data is structured, but say you have something like this:
books = {'Twilight': [500, 555, 580, 577, 523, 533, 556, 593],
         'Harry Potter': [650, 647, 653, 642, 633, 621, 625, 613],
         'Structure and Interpretation of Computer Programs': [1, 4, 15, 12, 7, 3, 8, 19],
        }
A simple moving average just takes the last n values and averages them:
def moving_av(l, n):
    """Take a list, l, and return the average of its last n elements."""
    tail = l[-n:]  # the last n observations, or the whole list if len(l) < n
    return sum(tail) / len(tail)
The slice notation simply grabs the tail end of the list, starting from the nth-to-last element. A moving average is a fairly standard way to smooth out the noise that a single spike or dip would otherwise introduce. The function can be used like so:
book_scores = {}
for book, reader_list in books.items():
    book_scores[book] = moving_av(reader_list, 5)
You'll want to play around with the number of days you average over. And if you want to emphasize recent trends, you can look at something like a weighted moving average, where newer observations count more heavily than older ones, as sketched below.
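One simple version is a linearly weighted average, where the newest observation carries the largest weight and the oldest the smallest (just a sketch; any decaying weighting scheme would work):

def weighted_moving_av(l, n):
    """Average the last n elements of l, weighting recent
    observations more heavily: the oldest gets weight 1,
    the newest gets weight len(tail).
    """
    tail = l[-n:]
    weights = range(1, len(tail) + 1)  # oldest -> 1, newest -> len(tail)
    return sum(w * x for w, x in zip(weights, tail)) / sum(weights)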
If you wanted to focus less on absolute readership and more on increases in readership, you could find the percent change between a 30-day moving average and a 5-day moving average:
d5_moving_av = moving_av(reader_list, 5)
d30_moving_av = moving_av(reader_list, 30)
book_score = (d5_moving_av - d30_moving_av) / d30_moving_av
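Putting the pieces together, you could score every book this way and rank them by growth. (With the sample data above there are only 8 observations, so the "30-day" average simply falls back to averaging the whole history, which the slice in moving_av already handles.)

book_scores = {}
for book, reader_list in books.items():
    d5_moving_av = moving_av(reader_list, 5)
    d30_moving_av = moving_av(reader_list, 30)
    book_scores[book] = (d5_moving_av - d30_moving_av) / d30_moving_av

# Sort from fastest-growing to fastest-shrinking readership
trending = sorted(book_scores, key=book_scores.get, reverse=True)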
With these simple tools you have a fair amount of flexibility in how much you emphasize past trends and how much you want to smooth out (or not smooth out) spikes.