Is there any more efficient approach?
Yes.
The first step would be to transform the original data into a set of "at time = T, bandwidth changes by N" events, sorted in chronological order, and simplified by merging events that happen at the same time.
For your example, if the input is [ [0,2,10], [1,2,10], [2,1,20] ]
then it would be broken up into:
**[0,2,10]**
At 0, bandwidth += 10
At 2, bandwidth += -10
**[1,2,10]**
At 1, bandwidth += 10
At 3, bandwidth += -10
**[2,1,20]**
At 2, bandwidth += 20
At 3, bandwidth += -20
...then sorted and simplified (merging events that happen at the same time - e.g. at time 2, "bandwidth += -10" and "bandwidth += 20" become a single "bandwidth += 10") to get:
At 0, bandwidth += 10
At 1, bandwidth += 10
At 2, bandwidth += 10
At 3, bandwidth += -30
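
In code, that event-building and merging step could look something like the following Python sketch (assuming, as in the example above, that each input entry is a `[start, duration, bandwidth]` triplet):

```python
def build_events(transfers):
    """Turn [start, duration, bandwidth] triplets into merged
    "at time T, bandwidth changes by N" events, sorted by time."""
    deltas = {}
    for start, duration, bandwidth in transfers:
        # Bandwidth goes up at the start and back down when the transfer ends.
        deltas[start] = deltas.get(start, 0) + bandwidth
        deltas[start + duration] = deltas.get(start + duration, 0) - bandwidth
    # Using a dict keyed by time merges same-time events; sorting gives
    # the simplified, chronological event list.
    return sorted(deltas.items())

print(build_events([[0, 2, 10], [1, 2, 10], [2, 1, 20]]))
# [(0, 10), (1, 10), (2, 10), (3, -30)]
```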
From there it's a simple matter of generating the final array from the sorted list:
10, 20, 30, 0
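
A minimal sketch of that expansion, fed with the merged event list from above (it assumes the timeline starts at the first event's time):

```python
def expand(events):
    """Expand sorted (time, delta) events into one bandwidth value per time unit."""
    result = []
    total = 0
    for i, (time, delta) in enumerate(events):
        total += delta
        # Fill from this event up to (but not including) the next one;
        # the last event contributes a single final value.
        next_time = events[i + 1][0] if i + 1 < len(events) else time + 1
        result.extend([total] * (next_time - time))
    return result

print(expand([(0, 10), (1, 10), (2, 10), (3, -30)]))
# [10, 20, 30, 0]
```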
To understand why this is more efficient, imagine what happens if time is tracked with higher precision (e.g. milliseconds instead of seconds) and the input is `[ [0,2000,10], [1000,2000,10], [2000,1000,20] ]` instead. For my approach, generating the final array would still be an outer loop with 4 iterations and an inner loop that can be a highly optimized `memset()` (C), `rep stosd` (80x86 assembly), or `np.full()` (Python with NumPy). For your approach, the outer loop needs about 3000 iterations, and the inner loop wastes a huge amount of time repeating a linear search that (for most iterations of the outer loop) finds the same answer as the previous iteration did.
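
To make the contrast concrete, here is a sketch of the same expansion for the millisecond-resolution input, where each constant segment is written with one vectorized fill; a NumPy slice assignment plays the same role here as the `np.full()`/`memset()` bulk write mentioned above:

```python
import numpy as np

def expand_numpy(events):
    """Same expansion, but each constant segment is written with one
    vectorized fill instead of a per-time-unit Python loop."""
    start = events[0][0]
    end = events[-1][0]
    result = np.zeros(end - start + 1, dtype=np.int64)
    total = 0
    for i, (time, delta) in enumerate(events):
        total += delta
        next_time = events[i + 1][0] if i + 1 < len(events) else time + 1
        # One bulk write per event: 4 writes total, no matter how fine
        # the time resolution is.
        result[time - start:next_time - start] = total
    return result

print(expand_numpy([(0, 10), (1000, 10), (2000, 10), (3000, -30)]))
# First 1000 entries are 10, the next 1000 are 20, the next 1000 are 30,
# and the final entry is 0.
```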