Reading and processing data with pandas is common, but it runs into memory problems with large datasets. I can read a large file like this:
import pandas as pd
df = pd.read_csv('mydata.csv.gz', sep=';')
However, when I try the same operation with Dask, I get an error:
import dask.dataframe as dd
df_base = dd.read_csv('CoilsSampleFiltered.csv.gz', sep=';')
Traceback:
---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-7-abc513f2a657> in <module>()
----> 1 df_base = dd.read_csv('CoilsSampleFiltered.csv.gz', sep=';')

~\AppData\Local\Continuum\Anaconda3\lib\site-packages\dask\dataframe\io\csv.py in read(urlpath, blocksize, collection, lineterminator, compression, sample, enforce, assume_missing, storage_options, **kwargs)
    424             enforce=enforce, assume_missing=assume_missing,
    425             storage_options=storage_options,
--> 426             **kwargs)
    427     read.__doc__ = READ_DOC_TEMPLATE.format(reader=reader_name,
    428                                             file_type=file_type)

~\AppData\Local\Continuum\Anaconda3\lib\site-packages\dask\dataframe\io\csv.py in read_pandas(reader, urlpath, blocksize, collection, lineterminator, compression, sample, enforce, assume_missing, storage_options, **kwargs)
    324
    325     # Use sample to infer dtypes
--> 326     head = reader(BytesIO(b_sample), **kwargs)
    327
    328     specified_dtypes = kwargs.get('dtype', {})
I am trying to figure out what the problem is. The file was written from R, which uses UTF-8 by default.
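To rule out an encoding problem in the file itself, I can decompress a small piece of it manually and decode it as UTF-8, independently of Dask; if the data were not valid UTF-8, this check should raise the same UnicodeDecodeError:

import gzip

# Decompress and decode a sample outside of Dask; 'rt' mode makes gzip
# handle both the decompression and the UTF-8 decoding in one step.
with gzip.open('CoilsSampleFiltered.csv.gz', 'rt', encoding='utf-8') as f:
    print(f.readline())

The traceback points at head = reader(BytesIO(b_sample), **kwargs), i.e. Dask hands a raw byte sample straight to pandas to infer dtypes, so my guess is that the sample is still gzip-compressed at that point. As a sketch of what I plan to try next, assuming the compression has to be declared explicitly and that gzip files cannot be split into blocks:

import dask.dataframe as dd

# Declare the compression explicitly and disable block splitting:
# a gzip stream cannot be read in independent chunks, so blocksize=None
# makes Dask treat the whole file as a single partition.
df_base = dd.read_csv('CoilsSampleFiltered.csv.gz', sep=';',
                      compression='gzip', blocksize=None)

Is this the right way to read a compressed CSV with Dask, or am I missing something else?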