My Node & Python backend is running just fine, but I've now hit an issue where, if the JSON I'm sending from Python back to Node is too long, it arrives split into two chunks and JSON.parse on the Node side fails.
How should I fix this? For example, the first batch clips at
... [1137.6962355826706, -100.78015825640887], [773.3834338399517, -198
and the second one has the remaining few entries
.201506231888], [-87276.575065248, -60597.8827676457], [793.1850250453127,
-192.1674702207991], [1139.4465453979683, -100.56741252031816],
[780.498416769341, -196.04064849430705]]}
Do I have to add some logic on the Node side for long JSONs, or is this some sort of buffering issue on the Python side that I can overcome with the proper settings? Here's all I'm doing on the Python side:
import json
import sys
import cv2
import numpy as np

outPoints, _ = cv2.projectPoints(inPoints, np.asarray(rvec), np.asarray(tvec),
                                 np.asarray(camera_matrix), np.asarray(dist_coeffs))
# flatten the output to get rid of double brackets per result before JSONifying
flattened = [val for sublist in outPoints for val in sublist]
print(json.dumps({'testdata': np.asarray(flattened).tolist()}))
sys.stdout.flush()
And on the Node side:
// Handle Python data coming in from the print() call
pythonProcess.stdout.on('data', function (data) {
  try {
    // If it parses as JSON, handle the data
    console.log(JSON.parse(data.toString()));
  } catch (e) {
    // Otherwise treat it as a log entry
    console.log(data.toString());
  }
});
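In case it helps, here is a minimal sketch of the kind of Node-side logic I'm considering (assuming each payload ends with the newline that Python's print() already appends; pythonProcess is the same child-process handle as above): buffer the incoming chunks and only parse once a complete line has arrived.

// A pipe delivers data in arbitrary chunks, so one JSON payload
// may be spread across several 'data' events. Buffer until a
// newline marks the end of a complete message.
let buffered = '';

pythonProcess.stdout.on('data', function (data) {
  buffered += data.toString();

  let newlineIndex;
  while ((newlineIndex = buffered.indexOf('\n')) !== -1) {
    // Cut one complete line off the front of the buffer
    const line = buffered.slice(0, newlineIndex);
    buffered = buffered.slice(newlineIndex + 1);
    if (line.trim() === '') continue;
    try {
      // If the line parses as JSON, handle the data
      console.log(JSON.parse(line));
    } catch (e) {
      // Otherwise treat it as a log entry
      console.log(line);
    }
  }
});

Would something like this be the right direction, or is there a cleaner built-in way, e.g. readline.createInterface({ input: pythonProcess.stdout }), which I believe does the same line-splitting?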