I have an S3 folder with multiple .feather files that I would like to load into Dask with Python, as described here: Load many feather files in a folder into dask. I have tried two ways; both give me different errors:
import pandas as pd
import feather
import dask
import dask.dataframe
ftr_filenames = [
    's3://my-bucket/my-dir/file_1.ftr',
    's3://my-bucket/my-dir/file_2.ftr',
    # ...
    's3://my-bucket/my-dir/file_30.ftr',
]
# open_files returns fsspec OpenFile objects, not delayed values
delayed_files = dask.bytes.open_files(ftr_filenames, 'rb')
# ---------------------------option 1 --------------------------------
dfs = [dask.delayed(pd.read_feather)(f) for f in delayed_files]
# ---------------------------option 2 --------------------------------
dfs = [dask.delayed(feather.read_dataframe)(f) for f in delayed_files]
# --------------------------------------------------------------------
df = dask.dataframe.from_delayed(dfs)
# -------------------------- error 1 ------------------------------
# 'S3File' object has no attribute '__fspath__'
# -------------------------- error 2 ------------------------------
# Cannot convert OpenFile to pyarrow.lib.NativeFile
Is there another way to read these files from S3? The main purpose here is to circumvent memory issues caused by pd.concat.
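For reference, one approach that sidesteps both errors is to enter each OpenFile as a context manager inside the delayed function, so pandas receives a real file-like object rather than the OpenFile/S3File wrapper. This is a minimal sketch, assuming the files are pyarrow-readable Feather files; the read_ftr helper name is mine, not from the original post:

import dask
import dask.dataframe as dd
import fsspec
import pandas as pd

@dask.delayed
def read_ftr(open_file):
    # entering the OpenFile yields a real file-like object,
    # which pd.read_feather (pyarrow underneath) accepts
    with open_file as f:
        return pd.read_feather(f)

# ftr_filenames as defined above
files = fsspec.open_files(ftr_filenames, mode='rb')
df = dd.from_delayed([read_ftr(f) for f in files])

Because each file is opened and read lazily inside its own task, no single worker has to materialize all 30 frames at once, which gives the same memory relief sought by avoiding pd.concat.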
question from:
https://stackoverflow.com/questions/65941923/loading-feather-files-from-s3-with-dask-delayed