PyTorch BERT TypeError: forward() got an unexpected keyword argument 'labels'

Training a BERT model using PyTorch transformers (following the tutorial here).


The following statement from the tutorial


loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)

results in


TypeError: forward() got an unexpected keyword argument 'labels'

Here is the full error:


TypeError                                 Traceback (most recent call last)
<ipython-input-53-56aa2f57dcaf> in <module>
     26         optimizer.zero_grad()
     27         # Forward pass
---> 28         loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
     29         train_loss_set.append(loss.item())
     30         # Backward pass

~/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

TypeError: forward() got an unexpected keyword argument 'labels'

I can't seem to figure out what kind of arguments the forward() function expects.


There is a similar question here, but I still don't understand what the solution is.


森栏
1 Answer

湖上湖

As far as I know, BertModel does not take labels in its forward() function. Check out the forward function parameters.

I suspect you are trying to fine-tune BertModel for a sequence classification task, and the API provides a class for that: BertForSequenceClassification. As you can see, its forward() function is defined as:

    def forward(self, input_ids, attention_mask=None, token_type_ids=None,
                position_ids=None, head_mask=None, labels=None):

Note that the forward() method returns the following:

    Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
        **loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
            Classification (or regression if config.num_labels==1) loss.
        **logits**: ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)``
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
            list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
            of shape ``(batch_size, sequence_length, hidden_size)``:
            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        **attentions**: (`optional`, returned when ``config.output_attentions=True``)
            list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Hope this helps!
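A minimal sketch of the fix, assuming the pytorch-transformers package that the quoted signature comes from. The 'bert-base-uncased' checkpoint, num_labels=2, and the toy batch are illustrative assumptions standing in for the question's real data:

    import torch
    from pytorch_transformers import BertForSequenceClassification

    # Swap BertModel for BertForSequenceClassification, whose forward() accepts `labels`.
    model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
                                                          num_labels=2)  # assumed binary task

    # Toy stand-ins for the question's b_input_ids / b_input_mask / b_labels.
    b_input_ids = torch.tensor([[101, 7592, 102]])   # [CLS] hello [SEP]
    b_input_mask = torch.ones_like(b_input_ids)
    b_labels = torch.tensor([1])

    outputs = model(b_input_ids, token_type_ids=None,
                    attention_mask=b_input_mask, labels=b_labels)
    loss = outputs[0]   # per the docs above, loss comes first when labels are provided
    loss.backward()

Note that because the model returns a tuple, the tutorial's line loss = model(...) would bind the whole tuple to loss; unpacking outputs[0] first is what makes the later loss.item() call work.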
