There are many ways to transform text into vectors, depending on the use case. The most intuitive one uses term frequency: given the vocabulary of the corpus (all the possible words), each text document is represented as a vector where each entry is the number of occurrences of the corresponding word in that document.
With this vocabulary:
["machine", "learning", "is", "a", "new", "field", "in", "computer", "science"]
the following text:
["machine", "is", "a", "field", "machine", "is", "is"]
is transformed into this vector:
[2, 0, 3, 1, 0, 1, 0, 0, 0]
One disadvantage of this technique is that the vector, which has the same size as the vocabulary of the corpus, may contain a lot of zeros. That is why other techniques exist. Still, this bag-of-words representation is widely used, and a slightly different version of it weights the counts with tf-idf; a small sketch of that variant follows the snippet below. Here is how the term-frequency vector above can be computed:
const vocabulary = ["machine", "learning", "is", "a", "new", "field", "in", "computer", "science"]
const text = ["machine", "is", "a", "field", "machine", "is", "is"]

// For each vocabulary word, count how many times it appears in the tokenized text
const parse = (tokens) => vocabulary.map((word) => tokens.filter((t) => t === word).length)

console.log(parse(text)) // [2, 0, 3, 1, 0, 1, 0, 0, 0]
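The tf-idf variant can be sketched in a few lines as well. This is only a rough illustration under some assumptions: a tiny made-up corpus of tokenized documents, raw counts for the term frequency, and a smoothed idf of the form ln((1 + N) / (1 + df)) + 1; real implementations often normalize the resulting vectors too.

// Reuses the `vocabulary` array defined above.
// Hypothetical corpus: each document is already tokenized into words.
const corpus = [
  ["machine", "is", "a", "field", "machine", "is", "is"],
  ["machine", "learning", "is", "a", "new", "field"],
  ["computer", "science", "is", "a", "field"]
]

// Term frequency: number of times `word` occurs in one document
const tf = (word, doc) => doc.filter((t) => t === word).length

// Smoothed inverse document frequency: ln((1 + N) / (1 + df)) + 1,
// so words appearing in fewer documents get a higher weight
const idf = (word, docs) => {
  const df = docs.filter((doc) => doc.includes(word)).length
  return Math.log((1 + docs.length) / (1 + df)) + 1
}

// tf-idf vector of one document over the shared vocabulary
const tfidf = (doc, docs) => vocabulary.map((word) => tf(word, doc) * idf(word, docs))

console.log(tfidf(corpus[0], corpus))

With this weighting, "machine" (present in only two of the three documents) is weighted slightly above its raw count, while words that appear in every document keep a weight equal to their raw count.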