I remember reading a blog post about the fuzzywuzzy library (while looking into another question), which can do this:
pip install fuzzywuzzy
You can use its partial_ratio function to "fuzzy match" the strings:
In [11]: from fuzzywuzzy.fuzz import partial_ratio
In [12]: partial_ratio('AAAB', 'the AAAB inc.')
Out[12]: 100
Which seems confident about it being a good match!
In [13]: partial_ratio('AAAB', 'AAPL')
Out[13]: 50
In [14]: partial_ratio('AAAB', 'Google')
Out[14]: 0
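If you don't want the extra dependency, the idea behind partial_ratio can be approximated with the standard library's difflib: slide the shorter string across the longer one and keep the best SequenceMatcher ratio. This is a rough sketch of the concept (fuzzywuzzy's real implementation is more sophisticated), but it reproduces the scores above:

```python
from difflib import SequenceMatcher

def rough_partial_ratio(needle, haystack):
    """Rough stdlib approximation of fuzzywuzzy's partial_ratio:
    compare the shorter string against every same-length window of
    the longer one and return the best ratio, scaled to 0-100."""
    shorter, longer = sorted((needle, haystack), key=len)
    if not shorter:
        return 0
    best = max(
        SequenceMatcher(None, shorter, longer[i:i + len(shorter)]).ratio()
        for i in range(len(longer) - len(shorter) + 1)
    )
    return int(round(best * 100))

print(rough_partial_ratio('AAAB', 'the AAAB inc.'))  # 100
print(rough_partial_ratio('AAAB', 'AAPL'))           # 50
print(rough_partial_ratio('AAAB', 'Google'))         # 0
```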
We can take the best match in the actual company list (assuming you have it):
In [15]: co_list = ['AAAB', 'AAPL', 'GOOG']
In [16]: df.Company.apply(lambda mistyped_co: max(co_list,
key=lambda co: partial_ratio(mistyped_co, co)))
Out[16]:
0 AAAB
1 AAAB
2 AAAB
3 AAAB
Name: Company, dtype: object
I strongly suspect there is something in scikit-learn or numpy to do this more efficiently on large datasets... but this should get the job done.
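For large datasets, one scikit-learn route (a sketch, not the only way) is to vectorize names as character n-gram TF-IDF vectors and use nearest-neighbour lookup instead of scoring every pair; the company names and queries below are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

co_list = ['AAAB', 'AAPL', 'GOOG']

# Character 2-3-grams make the match robust to typos and extra words.
vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 3))
co_vectors = vectorizer.fit_transform(co_list)

# One nearest neighbour under cosine distance per query.
nn = NearestNeighbors(n_neighbors=1, metric='cosine').fit(co_vectors)

queries = ['the AAAB inc.', 'AAPL computers']
_, idx = nn.kneighbors(vectorizer.transform(queries))
matches = [co_list[i[0]] for i in idx]
print(matches)  # expected: ['AAAB', 'AAPL']
```

Unlike the apply/max approach, the index is built once, so each lookup is cheap even with thousands of candidate names.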
If you don't have the company list, you'll probably have to do something cleverer...