In WordNet, every Lemma has a frequency count that is returned by the method lemma.count() and that is stored in the file nltk_data/corpora/wordnet/cntlist.rev.
Code example (Python 2):
from nltk.corpus import wordnet
syns = wordnet.synsets('stack')
for s in syns:
    for l in s.lemmas():
        print l.name + " " + str(l.count())
Result:
stack 2
batch 0
deal 1
flock 1
good_deal 13
great_deal 10
hatful 0
heap 2
lot 13
mass 14
mess 0
...
However, many counts are zero, and neither the source file nor the documentation states which corpus was used to create this data. According to the book Speech and Language Processing by Daniel Jurafsky and James H. Martin, the sense frequencies come from the SemCor corpus, which is a subset of the already small and outdated Brown Corpus.
So it is probably best to choose the corpus that best fits your application and to create the frequency data yourself, as Christopher suggested.
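For illustration, here is a minimal sketch of counting sense frequencies yourself from SemCor, which ships with NLTK (this assumes NLTK 3 and that the corpus has been downloaded via nltk.download('semcor'); the Counter-based tallying is just one possible approach):

Code example:
from collections import Counter
from nltk.corpus import semcor
from nltk.corpus.reader.wordnet import Lemma
from nltk.tree import Tree

counts = Counter()
# Iterating over all of SemCor takes a while; pass fileids to restrict it.
for sent in semcor.tagged_sents(tag='sem'):
    for chunk in sent:
        # Sense-tagged chunks are Trees whose label is a WordNet Lemma;
        # untagged tokens and unresolved labels are skipped.
        if isinstance(chunk, Tree) and isinstance(chunk.label(), Lemma):
            counts[chunk.label()] += 1

for lemma, n in counts.most_common(10):
    print(lemma, n)

The resulting Counter maps Lemma objects to how often they were annotated in SemCor, i.e. the same kind of information that cntlist.rev provides, but you can swap in any sense-annotated corpus of your choice.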
To make this Python 3.x compatible (in NLTK 3, name is a method, and print needs parentheses), just do:
Code example:
from nltk.corpus import wordnet
syns = wordnet.synsets('stack')
for s in syns:
    for l in s.lemmas():
        print(l.name() + " " + str(l.count()))
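A typical use of these counts is to pick the most frequent lemma of a synset; a small sketch (the max-by-count selection is my own illustration, not part of the code above):

Code example:
from nltk.corpus import wordnet

for s in wordnet.synsets('stack'):
    # Pick the lemma with the highest cntlist.rev frequency in each synset.
    best = max(s.lemmas(), key=lambda l: l.count())
    print(s.name() + " -> " + best.name() + " " + str(best.count()))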