
python - How to avoid NLTK's sentence tokenizer splitting on abbreviations?

I'm currently using NLTK for language processing, but I've run into a problem with sentence tokenization.

Here's the problem. Assume I have the sentence "Fig. 2 shows a U.S.A. map." When I use the Punkt tokenizer, my code looks like this:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

punkt_param = PunktParameters()
abbreviation = ['U.S.A', 'fig']  # abbreviations Punkt should not split on
punkt_param.abbrev_types = set(abbreviation)
tokenizer = PunktSentenceTokenizer(punkt_param)
tokenizer.tokenize('Fig. 2 shows a U.S.A. map.')

It returns this:

['Fig. 2 shows a U.S.A.', 'map.']

The tokenizer fails to detect the abbreviation "U.S.A." but works for "fig". Now, when I use the default tokenizer NLTK provides:

import nltk
nltk.tokenize.sent_tokenize('Fig. 2 shows a U.S.A. map.')

This time I get:

['Fig.', '2 shows a U.S.A. map.']

It recognizes the more common "U.S.A." but fails to see "fig"!

How can I combine these two approaches? I want to keep the default abbreviation choices while also adding my own abbreviations.


1 Reply


I think using lowercase u.s.a in the abbreviations list will work for you: Punkt stores abbreviation types in lowercase with the final period stripped, so 'U.S.A' never matches. Try this:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

punkt_param = PunktParameters()
# abbreviations must be lowercase, without the trailing period
abbreviation = ['u.s.a', 'fig']
punkt_param.abbrev_types = set(abbreviation)
tokenizer = PunktSentenceTokenizer(punkt_param)
tokenizer.tokenize('Fig. 2 shows a U.S.A. map.')

For me, it returns:

['Fig. 2 shows a U.S.A. map.']
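
As for the "combine" part of the question: you can load NLTK's pretrained English Punkt model and extend its abbreviation set in place, so you keep all the defaults and add your own on top. A minimal sketch, assuming the 'punkt' resource has been downloaded and that the pickle path exists in your NLTK version; note that _params is an internal attribute of PunktSentenceTokenizer, not documented API:

import nltk
import nltk.data

nltk.download('punkt', quiet=True)  # make sure the pretrained model is available

# Load the pretrained English Punkt model, then add custom abbreviations
# on top of the defaults. _params is an internal attribute, so this relies
# on implementation details of PunktSentenceTokenizer.
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
tokenizer._params.abbrev_types.update(['u.s.a', 'fig'])

print(tokenizer.tokenize('Fig. 2 shows a U.S.A. map.'))
# Expected: ['Fig. 2 shows a U.S.A. map.']

This keeps everything the pretrained model already knows (such as "U.S.A.") while adding your own entries (such as "fig").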
