I'm reading data from a database (50k+ rows) where one column is stored as JSON. I want to extract that into a pandas dataframe.
The snippet below works, but it is fairly inefficient and takes a very long time when run against the whole database.
Note that not all items have the same attributes and that the JSON has some nested attributes.
How could I make this faster?
import pandas as pd
import json

# read the sample data; the 'data' column holds JSON strings
df = pd.read_csv('http://pastebin.com/raw/7L86m9R2',
                 header=None, index_col=0, names=['data'])

# parse each row, normalize it to a one-row frame, then concatenate
df = (df.data.apply(json.loads)
        .apply(pd.json_normalize)   # pd.io.json.json_normalize in older pandas
        .pipe(lambda x: pd.concat(x.values)))
# this returns a DataFrame where each JSON key is a column
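For reference, one direction I've been considering (a minimal sketch against the same pastebin sample; pd.json_normalize accepts a whole list of parsed dicts, so the per-row normalize-and-concat could be collapsed into a single call):

import pandas as pd
import json

df = pd.read_csv('http://pastebin.com/raw/7L86m9R2',
                 header=None, index_col=0, names=['data'])

# parse every JSON string once, then normalize the whole list in one call;
# missing keys become NaN and nested attributes become dotted column names
records = [json.loads(s) for s in df['data']]
flat = pd.json_normalize(records)

This would avoid building and concatenating 50k single-row DataFrames, which I suspect is the main cost in my snippet above. Is something along these lines the right approach, or is there a better way?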