
How do I save a torchtext dataset?

I am working with text and use torchtext.data.Dataset. Creating the dataset takes a considerable amount of time. For just running the program, this is still acceptable. But I would like to debug the torch code for the neural network, and if Python is started in debug mode, the dataset creation takes roughly 20 minutes (!!). That's just to get a working environment in which I can debug the neural network code.


I would like to save the dataset, for example with pickle. This sample code is taken from here, but I removed everything that is not needed for this example:


from torchtext import data
from fastai.nlp import *
import pickle

PATH = 'data/aclImdb/'

TRN_PATH = 'train/all/'
VAL_PATH = 'test/all/'
TRN = f'{PATH}{TRN_PATH}'
VAL = f'{PATH}{VAL_PATH}'

TEXT = data.Field(lower=True, tokenize="spacy")

bs = 64
bptt = 70

FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)

with open("md.pkl", "wb") as file:
    pickle.dump(md, file)

To run the code, you need the aclImdb dataset, which can be downloaded from here. Unpack it into a data/ folder next to this snippet. The code fails on the last line, where pickle is used:


Traceback (most recent call last):
  File "/home/lhk/programming/fastai_sandbox/lesson4-imdb2.py", line 27, in <module>
    pickle.dump(md, file)
TypeError: 'generator' object is not callable

The fastai samples often use dill instead of pickle, but that did not work for me either.
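The underlying problem is that the object being dumped holds state that neither pickle nor dill can serialize, such as generators. A minimal stdlib-only illustration of the same failure mode (the `Holder` class is a hypothetical stand-in for an object like `md`):

```python
import pickle

class Holder:
    """Hypothetical stand-in for an object that keeps a generator attribute."""
    def __init__(self):
        # Generators cannot be pickled, so any object holding one fails too.
        self.stream = (line for line in ["a", "b"])

try:
    pickle.dumps(Holder())
    failed = False
except TypeError as err:
    failed = True
    print(f"pickle failed: {err}")
```

The fix is therefore not a different serializer, but extracting only the picklable components before saving, which is what the answers below do.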


胡说叔叔
427 views · 3 answers

3 Answers

PIPIONE

I came up with the following functions for myself:

import dill
from pathlib import Path

import torch
from torchtext.data import Dataset

def save_dataset(dataset, path):
    if not isinstance(path, Path):
        path = Path(path)
    path.mkdir(parents=True, exist_ok=True)
    torch.save(dataset.examples, path / "examples.pkl", pickle_module=dill)
    torch.save(dataset.fields, path / "fields.pkl", pickle_module=dill)

def load_dataset(path):
    if not isinstance(path, Path):
        path = Path(path)
    examples = torch.load(path / "examples.pkl", pickle_module=dill)
    fields = torch.load(path / "fields.pkl", pickle_module=dill)
    return Dataset(examples, fields)

Note that the actual class may differ: if you save a TabularDataset, load_dataset returns an instance of Dataset. This is unlikely to affect the data pipeline, but it may require extra effort when testing. With a custom tokenizer, the tokenizer itself must also be serializable (e.g. no lambda functions, etc.).
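The split-and-reconstruct pattern above can be demonstrated with the standard library alone. The `ToyDataset` class below is a hypothetical stand-in for torchtext's `Dataset` (just `examples` plus `fields`), and plain `pickle` stands in for `torch.save(..., pickle_module=dill)`:

```python
import pickle
import tempfile
from pathlib import Path

class ToyDataset:
    """Hypothetical stand-in for torchtext's Dataset: examples plus fields."""
    def __init__(self, examples, fields):
        self.examples = examples
        self.fields = fields

def save_dataset(dataset, path):
    path = Path(path)
    path.mkdir(parents=True, exist_ok=True)
    # Save the two picklable components separately instead of the whole object.
    with open(path / "examples.pkl", "wb") as f:
        pickle.dump(dataset.examples, f)
    with open(path / "fields.pkl", "wb") as f:
        pickle.dump(dataset.fields, f)

def load_dataset(path):
    path = Path(path)
    with open(path / "examples.pkl", "rb") as f:
        examples = pickle.load(f)
    with open(path / "fields.pkl", "rb") as f:
        fields = pickle.load(f)
    # Reassemble a dataset from the two saved parts.
    return ToyDataset(examples, fields)

with tempfile.TemporaryDirectory() as tmp:
    ds = ToyDataset(examples=[("good film", "pos")], fields={"text": None})
    save_dataset(ds, tmp)
    restored = load_dataset(tmp)
    print(restored.examples)  # → [('good film', 'pos')]
```

The design point is the same as in the answer: the dataset object as a whole is not serializable, but its `examples` and `fields` components are, so save those and rebuild the wrapper on load.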

慕哥6287543

You can always use pickle to dump the objects, but keep in mind one thing: pickle cannot handle dumping a dictionary or a list of Field objects directly, so it is best to decompose the list first.

Storing the Dataset object to a pickle file so it can be loaded easily later:

def save_to_pickle(dataSetObject, PATH):
    with open(PATH, 'wb') as output:
        for i in dataSetObject:
            pickle.dump(vars(i), output, pickle.HIGHEST_PROTOCOL)

The hardest part is yet to come... yes, loading the pickle file ;)

First, find all the field names and field attributes, then load the pickle file back into a Dataset object:

def load_pickle(PATH, FIELDNAMES, FIELD):
    dataList = []
    with open(PATH, "rb") as input_file:
        while True:
            try:
                # Take the next dictionary instance as the input instance
                inputInstance = pickle.load(input_file)
                # Plug its values into a list, in field order
                dataInstance = [inputInstance[FIELDNAMES[0]], inputInstance[FIELDNAMES[1]]]
                # Build up a list of Example objects
                dataList.append(Example.fromlist(dataInstance, fields=FIELD))
            except EOFError:
                break
    # Finally, create a Dataset object from the examples
    exampleListObject = Dataset(dataList, fields=FIELD)
    return exampleListObject

This hackish solution worked in my case; I hope you find it useful for yours as well. Any suggestions are welcome, by the way :).
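The load loop above relies on `pickle.load` raising `EOFError` once the stream is exhausted. That record-by-record pattern can be sketched with plain dictionaries and an in-memory stream (the field names here are made up for illustration):

```python
import io
import pickle

records = [{"text": "good movie", "label": "pos"},
           {"text": "dull plot", "label": "neg"}]

# Dump each record as its own pickle frame, back to back in one stream.
buf = io.BytesIO()
for rec in records:
    pickle.dump(rec, buf, pickle.HIGHEST_PROTOCOL)

# Read frames back one at a time until pickle.load hits the end of the stream.
buf.seek(0)
loaded = []
while True:
    try:
        loaded.append(pickle.load(buf))
    except EOFError:
        break

print(loaded == records)  # → True
```

Dumping `vars(example)` per example works for the same reason: each example becomes one independent, fully picklable frame, and the unpicklable Dataset wrapper is rebuilt only after all frames are read back.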