Keras LSTM input and output dimension problem

I am trying to build an LSTM model for multi-step forecasting. While testing the network setup, I found that it has a dimension problem.


Here is my test dataset:


import numpy as np
import pandas as pd

length = 100
df = pd.DataFrame()
df['x1'] = [i/float(length) for i in range(length)]
df['x2'] = [i**2 for i in range(length)]
df['y'] = df['x1'] + df['x2']

x_value = df.drop(columns='y').values    # shape (100, 2)
y_value = df['y'].values.reshape(-1, 1)  # shape (100, 1)

Here is my time-window data construction function:


def build_data(x_value, y_value, n_input, n_output):
    X, Y = list(), list()
    in_start = 0
    data_len = len(x_value)
    # step over the entire history one time step at a time
    for _ in range(data_len):
        # define the end of the input sequence
        in_end = in_start + n_input
        out_end = in_end + n_output
        if out_end <= data_len:
            x_input = x_value[in_start:in_end]   # e.g. t0-t3
            X.append(x_input)
            y_output = y_value[in_end:out_end]   # e.g. t4-t5
            Y.append(y_output)
        # move along one time step
        in_start += 1
    return np.array(X), np.array(Y)


X, Y = build_data(x_value, y_value, 1, 2)

The shapes of X and Y:


X.shape
### (98, 1, 2)

Y.shape
### (98, 2, 1)
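
As a quick sanity check of how these shapes arise (a sketch using the arrays built above): with n_input = 1 and n_output = 2, each sample holds one time step of the two features and each target holds the next two steps of y, so 100 - (1 + 2) + 1 = 98 windows are produced.

# first window: features at t0, targets at t1 and t2
print(X[0].shape)        # (1, 2)  -> (n_input, n_features)
print(Y[0].shape)        # (2, 1)  -> (n_output, 1)
print(X.shape, Y.shape)  # (98, 1, 2) (98, 2, 1)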

For the model part:


from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

verbose, epochs, batch_size = 1, 20, 16
n_neurons = 100
n_inputs, n_features = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_inputs, n_features), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)

It raises the error: ValueError: Error when checking target: expected time_distributed_41 to have shape (1, 1) but got array with shape (2, 1)
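
The mismatch can be seen from the model's output shape: with n_inputs = 1, the LSTM (return_sequences=True) emits one time step and TimeDistributed(Dense(1)) keeps that single step, while the target has two steps. A minimal check on the model built above (the printed shapes are what the layer definitions imply, not output from the original post):

# the network only produces n_inputs time steps, so it cannot
# match a target that has n_outputs = 2 time steps
print(model.output_shape)  # (None, 1, 1)
print(Y.shape)             # (98, 2, 1)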


If I use X, Y = build_data(x_value, y_value, 2, 2), i.e. input window == output window, it works fine. But I don't think this constraint should be necessary.


How can I get around this, i.e. set input window != output window?


Or is there any layer or setting I should use instead?


牛魔王的故事
1 Answer

蝴蝶不菲

You have a shape mismatch in the time dimension... you are trying to predict something whose time dimension is 2, while the input time dimension is 1. So you need something in your network that can increase the time dimension from 1 to 2. I used an UpSampling1D layer; below is a complete example.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed, UpSampling1D, Lambda

# create fake data
X = np.random.uniform(0, 1, (98, 1, 2))
Y = np.random.uniform(0, 1, (98, 2, 1))

verbose, epochs, batch_size = 1, 20, 16
n_neurons = 100
n_inputs, n_features = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_inputs, n_features), return_sequences=True))
model.add(UpSampling1D(n_outputs))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)

If the input time dimension is greater than the output time dimension, you can use a Lambda or a pooling operation (if the dimensions match). Below is an example using Lambda.

X = np.random.uniform(0, 1, (98, 3, 2))
Y = np.random.uniform(0, 1, (98, 2, 1))

verbose, epochs, batch_size = 1, 20, 16
n_neurons = 100
n_inputs, n_features = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_inputs, n_features), return_sequences=True))
model.add(Lambda(lambda x: x[:, -n_outputs:, :]))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)
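
To confirm that this removes the equal-window constraint, you can check the output shape or run a prediction on either model above (a quick sketch, not part of the original answer):

# the time axis now has n_outputs = 2 steps, matching Y
print(model.output_shape)   # (None, 2, 1)
pred = model.predict(X[:5])
print(pred.shape)           # (5, 2, 1) -- two forecast steps per sample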