
r - how to remove partial duplicates from a data frame?

The data I'm importing describes numeric measurements taken at various locations at more or less evenly spaced timestamps. Sometimes the spacing is not actually even and I have to discard some of the values; it doesn't matter much which one, as long as I end up with one value per timestamp per location.

What do I do with the data? I add it to a result data.frame. That frame has a timestamp column, and the values in that column are definitely evenly spaced according to the step:

# snap each raw timestamp up to the next step-minute boundary counted from epoch
timestamps <- ceiling(as.numeric((timestamps-epoch)*24*60/step))*step*60 + epoch
# write the measured values into the matching rows of the pre-built result frame
result[result$timestamp %in% timestamps, columnName] <- values
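
For context, a minimal self-contained version of that setup might look like the sketch below; the concrete step and epoch, the result frame, and the column name "siteA" are assumptions, since they are not shown above:

step <- 15                                     # step width in minutes (assumed)
epoch <- as.POSIXct("2009-01-01", tz = "UTC")  # reference origin (assumed)

# hypothetical result frame with one row per 15-minute slot
result <- data.frame(
  timestamp = seq(as.POSIXct("2009-09-30 10:00:00", tz = "UTC"),
                  as.POSIXct("2009-09-30 11:00:00", tz = "UTC"),
                  by = step * 60),
  siteA = NA_real_
)

timestamps <- as.POSIXct(c("2009-09-30 10:04:18", "2009-09-30 10:27:56"), tz = "UTC")
values     <- c(-2.079778, -2.093542)

# same rounding as above, but with the difftime unit made explicit (minutes)
snapped <- epoch + ceiling(as.numeric(difftime(timestamps, epoch, units = "mins")) / step) * step * 60
result[result$timestamp %in% snapped, "siteA"] <- values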

This approach does NOT work when several timestamps fall into the same time step. Here is an example:

> data.frame(ts=timestamps, v=values)
                   ts         v
1 2009-09-30 10:00:00 -2.081609
2 2009-09-30 10:04:18 -2.079778
3 2009-09-30 10:07:47 -2.113531
4 2009-09-30 10:09:01 -2.124716
5 2009-09-30 10:15:00 -2.102117
6 2009-09-30 10:27:56 -2.093542
7 2009-09-30 10:30:00 -2.092626
8 2009-09-30 10:45:00 -2.086339
9 2009-09-30 11:00:00 -2.080144
> data.frame(ts=ceiling(as.numeric((timestamps-epoch)*24*60/step))*step*60+epoch,
+ v=values)
                   ts         v
1 2009-09-30 10:00:00 -2.081609
2 2009-09-30 10:15:00 -2.079778
3 2009-09-30 10:15:00 -2.113531
4 2009-09-30 10:15:00 -2.124716
5 2009-09-30 10:15:00 -2.102117
6 2009-09-30 10:30:00 -2.093542
7 2009-09-30 10:30:00 -2.092626
8 2009-09-30 10:45:00 -2.086339
9 2009-09-30 11:00:00 -2.080144

In Python I would (mis)use a dictionary to achieve what I need:

dict(zip(timestamps, values)).items()

This returns a list of pairs in which the first coordinate is unique.

In R I don't know how to do this in a compact and efficient way.


1 Reply


I would use subset() combined with duplicated() to filter out the non-unique timestamps in the second data frame:

R> df_ <- read.table(textConnection('
                     ts         v
1 "2009-09-30 10:00:00" -2.081609
2 "2009-09-30 10:15:00" -2.079778
3 "2009-09-30 10:15:00" -2.113531
4 "2009-09-30 10:15:00" -2.124716
5 "2009-09-30 10:15:00" -2.102117
6 "2009-09-30 10:30:00" -2.093542
7 "2009-09-30 10:30:00" -2.092626
8 "2009-09-30 10:45:00" -2.086339
9 "2009-09-30 11:00:00" -2.080144
'), as.is=TRUE, header=TRUE)

R> subset(df_, !duplicated(ts))
                   ts      v
1 2009-09-30 10:00:00 -2.082
2 2009-09-30 10:15:00 -2.080
6 2009-09-30 10:30:00 -2.094
8 2009-09-30 10:45:00 -2.086
9 2009-09-30 11:00:00 -2.080
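
If you would rather keep the last reading in each slot instead of the first, duplicated() also accepts a fromLast argument:

subset(df_, !duplicated(ts, fromLast=TRUE))

Either way you end up with exactly one row per timestamp.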

Update: to select a specific value within each timestamp you can use aggregate:

aggregate(df_$v, by=list(df_$ts), function(x) x[1])         # first value
aggregate(df_$v, by=list(df_$ts), function(x) tail(x, n=1)) # last value
aggregate(df_$v, by=list(df_$ts), max)                      # max value
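
Note that aggregate() gives the grouping and value columns generic names (Group.1 and x here); naming the grouping variable and renaming the value column makes it easier to match the aggregated values back into the result frame from the question (the names agg and v below are just illustrative):

agg <- aggregate(df_$v, by=list(ts=df_$ts), FUN=function(x) x[1])
names(agg)[2] <- "v"   # aggregate calls the value column "x" by default
# agg now has one row per timestamp, with the chosen value in agg$v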
