I have a problem with tokenization; the assignment is to split a sentence into words.
This is what I have at the moment:
def tokenize(s):
    d = []
    start = 0
    while start < len(s):
        # skip any whitespace before the next word
        while start < len(s) and s[start].isspace():
            start = start + 1
        # advance end to the first whitespace after the word
        end = start
        while end < len(s) and not s[end].isspace():
            end = end + 1
        d = d + [s[start:end]]
        start = end
    print(d)
Running the program:
>>> tokenize("He was walking, it was fun")
['He', 'was', 'walking,', 'it', 'was', 'fun']
This works fine, but the problem, as you can see, is that my program includes the comma in the word "walking". I want to separate the comma (and other "symbols") as individual "words".
Such as:
['He', 'was', 'walking', ',', 'it', 'was', 'fun']
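For reference, the target output can be produced with the standard `re` module (just a sketch to show the expected result; the assignment presumably wants a manual character-by-character solution like mine):

```python
import re

def tokenize_re(s):
    # \w+ matches a run of word characters (a word);
    # [^\w\s] matches any single character that is neither a word
    # character nor whitespace (commas, periods, and so on)
    return re.findall(r"\w+|[^\w\s]", s)

print(tokenize_re("He was walking, it was fun"))
# ['He', 'was', 'walking', ',', 'it', 'was', 'fun']
```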
How can I modify my code to fix this?
Thanks in advance!
question from:
https://stackoverflow.com/questions/65941369/how-to-separate-comma-from-word-tokenization