Usually, when I do NLP analysis, I feed one tokenized text into the model and get a prediction. This is the standard approach.
Now I would like to get one overall prediction from many texts per day. I have the true value I want to predict for each day. This overall value cannot be predicted by analysing individual texts, only by analysing all the texts from that day together.
Because of that, I am thinking about some kind of grouping, for example joining many tokenized texts into one matrix. Something like:
[[0, 5, 2, 15, 80, 55],
 [5, 4, 90, 93, 1, 120],
 [5, 50, 1, 51, 0, 0],
 [...],
 [115, 3, 8, 95, 2, 0]],
where each row of the matrix is one text from the given time period.
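To make the idea concrete, here is a minimal sketch of that stacking step, assuming each text is already a list of token IDs. Since texts within a day usually have different lengths, the rows have to be padded to a common length before they can form one matrix. The function name `stack_day` and the `pad_id` parameter are hypothetical names for illustration, not from any particular library:

```python
import numpy as np

def stack_day(texts, pad_id=0):
    """Pad variable-length token-ID lists to a common length and
    stack them into one (num_texts, max_len) matrix for one day.
    pad_id is the (assumed) ID of the padding token."""
    max_len = max(len(t) for t in texts)
    return np.array([t + [pad_id] * (max_len - len(t)) for t in texts])

# Three tokenized texts from the same day, with different lengths:
day = [[0, 5, 2, 15, 80, 55],
       [5, 4, 90, 93],
       [5, 50, 1, 51, 0]]

matrix = stack_day(day)
print(matrix.shape)  # (3, 6)
```

One caveat with this layout: the number of texts per day varies, so the matrix height changes from day to day, and the model (or a pooling step over rows) has to handle that.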
What do you think about this aggregation? Is it a bad idea?