This is explained in the section "Understanding How Intervals are computed" in the "Stream Processing with Apache Spark" book published by O'Reilly:
"The window intervals are aligned to the start of the second/minute/hour/day that corresponds to the next upper time magnitude of the time unit used."
In your case you are always using minutes, so the next upper time magnitude is the hour. Spark therefore aligns the windows so that a boundary falls on the start of the hour. Your cases in more detail (ignore the 2 seconds; that is just a delay in the internals):
- 10 minutes: 22:40 + 10 + 10 -> start of the hour
- 8 minutes: 22:34 + 8 + 8 + 8 -> start of the hour
- 5 minutes: 22:35 + 5 + 5 + ... + 5 -> start of the hour
- 14 minutes: 22:46 + 14 -> start of the hour
It is independent of the incoming data and its timestamp/event_time.
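The alignment described above can be sketched with plain modular arithmetic on the epoch timestamp. Note that `window_bounds` below is a hypothetical helper for illustration, not Spark's actual API, and it coincides with hour-start alignment only when the window length evenly divides an hour (as in the 10-, 5-, and other divisor-of-60 cases above):

```python
from datetime import datetime, timezone

def window_bounds(event_time, window_minutes):
    # Hypothetical sketch: align a tumbling window by modular arithmetic
    # on the event's epoch timestamp. For window lengths that evenly
    # divide an hour, this matches the "start of the hour" alignment.
    duration = window_minutes * 60
    epoch = int(event_time.replace(tzinfo=timezone.utc).timestamp())
    start = epoch - (epoch % duration)  # snap down to the window boundary
    return (datetime.fromtimestamp(start, tz=timezone.utc),
            datetime.fromtimestamp(start + duration, tz=timezone.utc))

# An event at 22:47 with a 10-minute window falls into [22:40, 22:50),
# regardless of when the first event actually arrived.
start, end = window_bounds(datetime(2021, 5, 3, 22, 47), 10)
```

This also makes the data-independence point concrete: the boundaries are derived from the clock grid, not from the first event's timestamp.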
As an additional note, the lower window boundary is inclusive whereas the upper one is exclusive. In mathematical notation this would be written as [start_time, end_time).
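A minimal illustration of those half-open semantics (`in_window` is a hypothetical helper, not a Spark API):

```python
from datetime import datetime, timezone

def in_window(ts, window_start, window_end):
    # Half-open interval [window_start, window_end):
    # the start is inclusive, the end is exclusive.
    return window_start <= ts < window_end

w_start = datetime(2021, 5, 3, 22, 40, tzinfo=timezone.utc)
w_end = datetime(2021, 5, 3, 22, 50, tzinfo=timezone.utc)
# An event exactly at 22:50 belongs to the *next* window, not this one.
```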