I am trying to speed up my project to count word frequencies. I have 360+ text files, and I need to get the total number of words and the number of times each word from another list of words appears. I know how to do this with a single text file.
>>> import nltk
>>> import os
>>> import re
>>> os.chdir(r"C:\Users\Cameron\Desktop\PDF-to-txt")
>>> filename="1976.03.txt"
>>> textfile=open(filename,"r")
>>> inputString=textfile.read()
>>> word_list=re.split(r'\s+',inputString.lower())
>>> print 'Words in text:', len(word_list)
#spits out number of words in the textfile
>>> word_list.count('inflation')
#spits out number of times 'inflation' occurs in the textfile
>>> word_list.count('jobs')
>>> word_list.count('output')
It's too tedious to get the frequencies of 'inflation', 'jobs', and 'output' individually. Can I put these words into a list and get the frequency of every word in the list at the same time? Basically, I want to do this with Python.
Example: Instead of this:
>>> word_list.count('inflation')
3
>>> word_list.count('jobs')
5
>>> word_list.count('output')
1
I want to do this (I know this isn't real code, this is what I'm asking for help on):
>>> list1='inflation', 'jobs', 'output'
>>> word_list.count(list1)
'inflation', 'jobs', 'output'
3, 5, 1
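For reference, here is a rough sketch of the kind of thing I have in mind, using collections.Counter (the word_list here is a made-up stand-in for the one read from a file; I'm not sure this is the idiomatic approach):

```python
from collections import Counter

# Hypothetical word_list standing in for the one read from a text file
word_list = ['inflation', 'jobs', 'inflation', 'output', 'jobs', 'inflation',
             'jobs', 'jobs', 'jobs', 'growth']

list1 = ['inflation', 'jobs', 'output']

# Count every word once, then look up each term from list1
counts = Counter(word_list)
frequencies = [counts[word] for word in list1]
print(frequencies)  # [3, 5, 1]
```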
My list of words is going to have 10-20 terms, so I need to be able to just point Python at a list of words and get the count of each one. It would also be nice if the output could be copied and pasted into an Excel spreadsheet, with the words as columns and the frequencies as rows
Example:
inflation, jobs, output
3, 5, 1
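Something like this sketch might produce that Excel-friendly layout (comma-separated, so it can be pasted in or saved as a CSV; the counts here are made up for illustration):

```python
list1 = ['inflation', 'jobs', 'output']
frequencies = [3, 5, 1]  # hypothetical counts, just for illustration

# Words as the header row, frequencies as the row beneath
header = ', '.join(list1)
row = ', '.join(str(n) for n in frequencies)
print(header)
print(row)
```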
And finally, can anyone help automate this for all of the text files? I figure I can just point Python at the folder and it can do the above word counting, from the new list, for each of the 360+ text files. Seems easy enough, but I'm a bit stuck. Any help?
An output like this would be fantastic:
Filename1
inflation, jobs, output
3, 5, 1
Filename2
inflation, jobs, output
7, 2, 4
Filename3
inflation, jobs, output
9, 3, 5
Thanks!