What you're seeing is a very common problem in traditional statistical natural language processing (NLP). In short, the data you're running the tagger on doesn't look like the data it was trained on. NLTK doesn't document the details, but as far as I know the default tagger is trained on Wall Street Journal articles, the Brown Corpus, or some combination of the two. These corpora contain very few imperatives, so when you give the tagger imperatives it mis-tags them.
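You can see the mismatch directly by running the default tagger on a recipe instruction. A quick sketch (you may need to download NLTK's tokenizer and tagger models first, and the exact tags can vary between NLTK versions):

import nltk

words = nltk.word_tokenize("combine 1 1/2 cups flour")
tagged_words = nltk.pos_tag(words)
# Likely something close to the mis-tagged output used below, e.g.
# [('combine', 'NN'), ('1', 'CD'), ('1/2', 'CD'), ('cups', 'NNS'), ('flour', 'VBD')]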
A good long-term solution would be to correct the tags for a large corpus of recipes and train a tagger on the corrected data; that way you eliminate the mismatch between training and testing data. This is, however, a huge amount of work. Ideally, a corpus with a lot of imperatives would already exist; my research group has looked into this, and we have not found a suitable one, although we are in the process of producing one.
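For what it's worth, once such a corpus existed, training on it with NLTK would be the easy part. Here's a minimal sketch using backoff taggers and toy data; corrected_sents is a stand-in for a real hand-corrected recipe corpus:

import nltk

# Toy stand-in for a hand-corrected corpus: a list of sentences,
# each a list of (word, tag) pairs.
corrected_sents = [
    [('combine', 'VB'), ('the', 'DT'), ('flour', 'NN')],
    [('stir', 'VB'), ('in', 'RP'), ('the', 'DT'), ('eggs', 'NNS')],
]

# Back off from bigram context to unigram counts to a default tag.
default = nltk.DefaultTagger('NN')
unigram = nltk.UnigramTagger(corrected_sents, backoff=default)
bigram = nltk.BigramTagger(corrected_sents, backoff=unigram)

print(bigram.tag(['combine', 'the', 'flour']))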
A much simpler solution, which I've been using on a recent project that required imperatives to be handled correctly, is to note which imperatives you want recognized and force the tags for those words to be correct.
So in the example below, I made a dictionary saying that "combine" should be treated as a verb, and then used a list comprehension to change the tags.
tagged_words = [('combine', 'NN'), ('1', 'CD'), ('1/2', 'CD'), ('cups', 'NNS'), ('flour', 'VBD')]
force_tags = {'combine': 'VB'}
new_tagged_words = [(word, force_tags.get(word, tag)) for word, tag in tagged_words]
new_tagged_words now contains the original tags, except that the forced tag is substituted wherever the word has an entry in force_tags.
>>> new_tagged_words
[('combine', 'VB'), ('1', 'CD'), ('1/2', 'CD'), ('cups', 'NNS'), ('flour', 'VBD')]
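If you're doing this for every sentence, you can wrap the tagging and the override into one helper (retag is just my name for it, not an NLTK function):

import nltk

def retag(words, force_tags):
    # Tag with the default tagger, then substitute any forced tags.
    tagged = nltk.pos_tag(words)
    return [(word, force_tags.get(word, tag)) for word, tag in tagged]

new_tagged_words = retag(['combine', '1', '1/2', 'cups', 'flour'], {'combine': 'VB'})

Whatever the default tagger says, 'combine' will come back tagged 'VB'.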
This solution does require you to list every word you want forced to a verb tag. That is far from ideal, but short of retraining on corrected data as described above, there isn't a better general solution.