The simplest approach is to use two LSTMs: a word-level LSTM that encodes each inner sequence into a vector, and a sequence-level LSTM that encodes the resulting sequence of vectors.

Prepare a toy dataset:

```python
import torch
import torch.nn as nn

xi = [
    [1, 48, 91, 0],   # Input features at timestep 1
    [20, 5, 17, 32],  # Input features at timestep 2
    [12, 18, 0, 0],   # Input features at timestep 3
    [0, 0, 0, 0],     # Input features at timestep 4
    [0, 0, 0, 0],     # Input features at timestep 5
]
yi = 1

x = torch.tensor([xi, xi])
y = torch.tensor([yi, yi])

print(x.shape)  # torch.Size([2, 5, 4])
print(y.shape)  # torch.Size([2])
```

Here `x` is the input batch, with `batch_size = 2`.

Embed the input:

```python
vocab_size = 1000
embed_size = 100
hidden_size = 200
bs = 2

embed = nn.Embedding(vocab_size, embed_size)
x = embed(x)  # shape [2, 5, 4, 100]
```

The first, word-level LSTM encodes each word sequence into a single vector:

```python
# Flatten the batch so each of the bs * 5 = 10 word sequences
# becomes its own item of length 4: [10, 4, 100]
x = x.view(bs * 5, 4, 100)

wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
# Keep only the final hidden state of each sequence
_, (hn, _) = wlstm(x)  # hn shape [1, 10, 200]
# Take the output of the last layer
hn = hn[0]  # [10, 200]
```

The second, sequence-level LSTM encodes each sequence of vectors into a single vector:

```python
# Reshape hn into [bs, num_seq, hidden_size]
hn = hn.view(2, 5, 200)

# Pass it through another LSTM and take its final state hn
slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
_, (hn, _) = slstm(hn)  # hn shape [1, 2, 200]
# Again, take the hidden state of the last layer
hn = hn[0]  # [2, 200]
```

Add a classification layer:

```python
pred_linear = nn.Linear(hidden_size, 1)
output = torch.sigmoid(pred_linear(hn))  # [2, 1]
```
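For reference, here is the same pipeline wrapped into one self-contained module. This is a minimal sketch; the class name `HierLSTMClassifier` and the random-input demo are illustrative, not part of the original answer:

```python
import torch
import torch.nn as nn

class HierLSTMClassifier(nn.Module):
    """Word-LSTM encodes each inner sequence; seq-LSTM encodes the
    resulting sequence of vectors; a linear head outputs one score."""

    def __init__(self, vocab_size=1000, embed_size=100, hidden_size=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.pred_linear = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: [bs, num_seq, seq_len] of token ids
        bs, num_seq, seq_len = x.shape
        e = self.embed(x)                      # [bs, num_seq, seq_len, embed]
        e = e.view(bs * num_seq, seq_len, -1)  # [bs*num_seq, seq_len, embed]
        _, (hn, _) = self.wlstm(e)             # hn: [1, bs*num_seq, hidden]
        hn = hn[0].view(bs, num_seq, -1)       # [bs, num_seq, hidden]
        _, (hn, _) = self.slstm(hn)            # hn: [1, bs, hidden]
        return torch.sigmoid(self.pred_linear(hn[0]))  # [bs, 1]

model = HierLSTMClassifier()
x = torch.randint(0, 1000, (2, 5, 4))
print(model(x).shape)  # torch.Size([2, 1])
```

Note that neither LSTM is told about padding here; for real data you would typically mask or pack the padded timesteps (e.g. with `nn.utils.rnn.pack_padded_sequence`) so trailing zeros don't dilute the final hidden states.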