I collected a small dataset for binary text classification, and my goal is to train a model using the method proposed in Convolutional Neural Networks for Sentence Classification.
I built the dataset with torch.utils.data.Dataset. Basically, each sample in my dataset my_data looks like this (for example):
{"words":[0,1,2,3,4],"label":1},
{"words":[4,9,20,30,4,2,3,4,1],"label":0}
Next I looked at writing custom dataloaders with PyTorch, using:
dataloader = DataLoader(my_data, batch_size=2,
                        shuffle=False, num_workers=4)
I expected that enumerating over a batch would yield something like:
{"words":[[0,1,2,3,4],[4,9,20,30,4,2,3,4,1]],"label":[1,0]}
However, it is more like this:
{"words":[[0,4],[1,9],[2,20],[3,30],[4,4]],"label":[1,0]}
I guess this has something to do with the samples not being of equal size. Do they need to be the same size, and if so, how do I achieve that? For people who know this paper: what does your training data look like?
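For what it's worth, the fix I am currently considering is a custom collate_fn that pads every "words" list in a batch to the longest one (pad_collate and padding with index 0 are my own assumptions, not something from the paper):

import torch
from torch.utils.data import DataLoader

def pad_collate(batch):
    # Pad each sample's word indices to the longest sequence in the batch.
    max_len = max(len(sample["words"]) for sample in batch)
    words = torch.zeros(len(batch), max_len, dtype=torch.long)  # 0 = pad index (assumed)
    for i, sample in enumerate(batch):
        words[i, :len(sample["words"])] = torch.tensor(sample["words"], dtype=torch.long)
    labels = torch.tensor([sample["label"] for sample in batch], dtype=torch.long)
    return {"words": words, "label": labels}

dataloader = DataLoader(my_data, batch_size=2, shuffle=False,
                        num_workers=4, collate_fn=pad_collate)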
EDIT:
import json
import torch
from string import punctuation
from nltk.corpus import stopwords as nltk_stopwords
from nltk.tokenize import word_tokenize
from torch.utils.data import Dataset

stopwords = set(nltk_stopwords.words("english"))  # assuming NLTK stopwords here

class CustomDataset(Dataset):
    def __init__(self, path_to_file, max_size=10, transform=None):
        with open(path_to_file) as f:
            self.data = json.load(f)
        self.transform = transform
        self.vocab = self.build_vocab(self.data)  # helper defined elsewhere
        self.word2idx, self.idx2word = self.word2index(self.vocab)  # helper defined elsewhere

    def get_vocab(self):
        return self.vocab

    def get_word2idx(self):
        return self.word2idx, self.idx2word

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        inputs_ = word_tokenize(self.data[idx][0])
        inputs_ = [w for w in inputs_ if w not in stopwords]    # drop stopwords
        inputs_ = [w for w in inputs_ if w not in punctuation]  # drop punctuation tokens
        inputs_ = [self.word2idx[w] for w in inputs_]           # convert words to indices
        label = {"positive": 1, "negative": 0}
        label_ = label[self.data[idx][1]]                       # convert label to 0|1
        sample = {"words": inputs_, "label": label_}
        if self.transform:
            sample = self.transform(sample)
        return sample
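Since the model in the paper convolves over fixed-length inputs, I am also considering a transform that pads or truncates every sample to max_size. The PadTruncate class and the "data.json" path below are my own sketch, not something from the paper:

from torch.utils.data import DataLoader

class PadTruncate:
    # Hypothetical transform: pad (with index 0, assumed) or truncate
    # each sample's word indices to a fixed length.
    def __init__(self, max_size=10, pad_idx=0):
        self.max_size = max_size
        self.pad_idx = pad_idx

    def __call__(self, sample):
        words = sample["words"][:self.max_size]
        words = words + [self.pad_idx] * (self.max_size - len(words))
        return {"words": torch.tensor(words, dtype=torch.long),
                "label": sample["label"]}

my_data = CustomDataset("data.json", max_size=10, transform=PadTruncate(10))
dataloader = DataLoader(my_data, batch_size=2, shuffle=False, num_workers=4)

With every sample the same length, the default batching then stacks them into a (batch_size, max_size) tensor, which is the shape I expected above.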