Suppose we have a table of numbers like this (we can assume it is a square table):
20  2  1  3  4
 5  1 14  8  9
15 12 17 17 11
16  1  1 15 18
20 13 15  5 11
Your job is to calculate the maximum sum of n numbers where n is the number of rows or columns in the table. The catch is each number must come from a unique row and column.
For example, selecting the numbers at (0,0), (1,1), (2,2), (3,3), and (4,4) is acceptable, but (0,0), (0,1), (2,2), (3,3), and (4,4) is not because the first two numbers were pulled from the same row.
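To make the rule concrete, here is a tiny helper I put together just for illustration (the names is_valid_selection and example_grid are mine, not part of the problem statement); it checks the unique-row/unique-column condition and sums a selection:

# Illustration only: a selection is valid if no row index and no column index repeats.
def is_valid_selection(positions):
    rows = [r for r, _ in positions]
    cols = [c for _, c in positions]
    return len(set(rows)) == len(rows) and len(set(cols)) == len(cols)

example_grid = [
    [20, 2, 1, 3, 4],
    [5, 1, 14, 8, 9],
    [15, 12, 17, 17, 11],
    [16, 1, 1, 15, 18],
    [20, 13, 15, 5, 11],
]

diagonal = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
print(is_valid_selection(diagonal))                  # True
print(sum(example_grid[r][c] for r, c in diagonal))  # 20 + 1 + 17 + 15 + 11 = 64

bad = [(0, 0), (0, 1), (2, 2), (3, 3), (4, 4)]
print(is_valid_selection(bad))                       # False: two picks come from row 0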
My (laughable) solution is to iterate through every possible permutation of column indices, pairing each row with a different column. This works for small grids, but of course it becomes incredibly slow as n gets big; the time complexity is O(n!), if I'm not mistaken (sample Python code below).
I really think this can be solved in better time, but I'm not coming up with anything sufficiently clever.
So my question is, what algorithm should be used to solve this?
If it helps, this problem seems similar to the knapsack problem.
import itertools
import re
grid = """20 2 1 3 4
5 1 14 8 9
15 12 17 17 11
16 1 1 15 18
20 13 15 5 11"""
# Parse the grid into a list of integer rows.
grid = [[int(x) for x in re.split(r"\s+", line)] for line in grid.split("\n")]

# Brute force: try every way of assigning a distinct column to each row.
possible_column_indexes = itertools.permutations(range(len(grid)))

max_sum = 0
max_positions = []
for column_indexes in possible_column_indexes:
    current_sum = 0
    current_positions = []
    for row, col in enumerate(column_indexes):
        current_sum += grid[row][col]
        current_positions.append("(%d, %d)" % (row, col))
    if current_sum > max_sum:
        max_sum = current_sum
        max_positions = current_positions

print("Max sum is", max_sum)
for position in max_positions:
    print(position)
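As a side note (not part of the solution, just to show why this gets out of hand), the number of column assignments the loop has to visit is n!, which grows absurdly fast:

import math

# Number of row/column assignments the brute force must check for a few grid sizes.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))  # 120, 3628800, 1307674368000, 2432902008176640000

For what it's worth, unless I've slipped up somewhere, on the 5x5 grid above the brute force reports a max sum of 82, picking (0,0), (1,2), (2,3), (3,4), (4,1).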