Incompatible shapes: [1020,1,1] vs. [1019,1,1] - TensorFlow

The problem occurs when I set the neural network's periods to 1: it fails on the data arrays, complaining that one shape is larger than the other. If I use periods = len(valuesAnalisys) - 1, everything works fine!


Periods:


periods = 1

Error returned:


Incompatible shapes: [1020,1,1] vs. [1019,1,1]

Neural network:


datecollect = [x[0] for x in dataSet]

servers = [x[1] for x in dataSet]

valuesAnalisys = [float(x[2]) for x in dataSet]


base = np.array(valuesAnalisys)


periods = 1

future_forecast = 1


X = base[0:(len(base) - (len(base) % periods))]

X_batches = X.reshape(-1, periods, 1)


y = base[1:(len(base) - (len(base) % periods)) + future_forecast]

y_batches = y.reshape(-1, periods, 1)


X_test = base[-(periods + future_forecast):]

X_test = X_test[:periods]

X_test = X_test.reshape(-1, periods, 1)

y_test = base[-(periods):]

y_test = y_test.reshape(-1, periods, 1)
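The mismatch can be reproduced without TensorFlow. With periods = 1, the y slice asks for one element past the end of base, and NumPy silently clips it, leaving y one element shorter than X. A minimal sketch using a 6-element stand-in for the 1020-sample base:

```python
import numpy as np

# Stand-in for the 1020-sample series; 6 elements keep the arithmetic visible.
base = np.arange(6, dtype=np.float32)
periods = 1
future_forecast = 1

# Same slicing as the question's code.
X = base[0:(len(base) - (len(base) % periods))]                     # indices 0..5 -> 6 elements
y = base[1:(len(base) - (len(base) % periods)) + future_forecast]   # asks for 1..6, gets 1..5 -> 5 elements

X_batches = X.reshape(-1, periods, 1)
y_batches = y.reshape(-1, periods, 1)

print(X_batches.shape)  # (6, 1, 1)
print(y_batches.shape)  # (5, 1, 1) -- one batch short, mirroring [1020,1,1] vs [1019,1,1]
```

With periods = len(base) - 1, the modulo trims X and y to the same number of elements, which is why the larger value happens to work.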


tf.reset_default_graph()


appetizer = 1

hidden_neurons = 100

exit_neurons = 1


xph = tf.placeholder(tf.float32, [None, periods, appetizer])

yph = tf.placeholder(tf.float32, [None, periods, exit_neurons])


cell = tf.contrib.rnn.BasicRNNCell(num_units = hidden_neurons, activation = tf.nn.relu)


cell = tf.contrib.rnn.OutputProjectionWrapper(cell, output_size = 1)


exit_rnn, _ = tf.nn.dynamic_rnn(cell, xph, dtype = tf.float32)

calculateError = tf.losses.mean_squared_error(labels = yph, predictions = exit_rnn)

otimizador = tf.train.AdamOptimizer(learning_rate = 0.001)

training = otimizador.minimize(calculateError)


with tf.Session() as sess:

    sess.run(tf.global_variables_initializer())


    for epoch in range(2000):

        _, cost = sess.run([training, calculateError], feed_dict = {xph: X_batches, yph: y_batches})

        if epoch % 100 == 0:

            print("[INFO] Epoch: {} - Level Error: {}".format(epoch,cost))


    forecast = sess.run(exit_rnn, feed_dict = {xph: X_test})


y_test.shape

y_test2 = np.ravel(y_test)


final_forecast = np.ravel(forecast)


mae = mean_absolute_error(y_test2, final_forecast)


for (host, forecast, date) in list(zip(servers, final_forecast, datecollect)):

    send.postForecastMemory(host, forecast, cost, date)

asked by 慕田峪7331174

1 Answer

月关宝盒

The culprit seems to be the fixed time dimension in the RNN cell:

xph = tf.placeholder(tf.float32, [None, periods, appetizer])

yph = tf.placeholder(tf.float32, [None, periods, exit_neurons])

cell = tf.contrib.rnn.BasicRNNCell(num_units = hidden_neurons, activation = tf.nn.relu)

Here, in both xph and yph, you have specified the time dimension as periods. So if you feed in a longer or shorter signal, you get an error. I cannot infer the exact dimensions of your model's layers because you did not provide the input shapes or a model summary, so I am using placeholder numbers. There are two possible fixes.

1. Instead of a fixed time dimension of periods, use None:

xph = tf.placeholder(tf.float32, [None, None, appetizer])

yph = tf.placeholder(tf.float32, [None, None, exit_neurons])

The downside is that all signals within a batch must still have the same length; alternatively, you can simply train with batch size = 1 and not worry about the time lengths at all.

2. Use truncation/padding to solve the length problem. Just pass the signals through a preprocessing function that adds or removes the extra time steps:

import numpy as np

def pre_process(x, fixed_len = 1000): # x.shape -> (100, 1000, 1)
    if x.shape[1] >= fixed_len:
        return x[:, :fixed_len, :]
    else:
        z_ph = np.zeros((x.shape[0], fixed_len, x.shape[2]))
        z_ph[:, :x.shape[1], :] = x
        return z_ph

X_batches = pre_process(X_batches, YOU_CHOOSE_THIS_LENGTH) # based on the length of your data

X_test = pre_process(X_test, YOU_CHOOSE_THIS_LENGTH)
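As a quick sanity check of the truncate/pad approach, the helper can be exercised on toy batches (the shapes below are arbitrary illustration values, not from the question's data):

```python
import numpy as np

def pre_process(x, fixed_len=1000):
    """Truncate along the time axis if too long, zero-pad at the end if too short."""
    if x.shape[1] >= fixed_len:
        return x[:, :fixed_len, :]
    z_ph = np.zeros((x.shape[0], fixed_len, x.shape[2]))
    z_ph[:, :x.shape[1], :] = x
    return z_ph

long_batch = np.ones((4, 1200, 1))   # longer than fixed_len -> truncated
short_batch = np.ones((4, 800, 1))   # shorter than fixed_len -> zero-padded

print(pre_process(long_batch).shape)   # (4, 1000, 1)
print(pre_process(short_batch).shape)  # (4, 1000, 1)
```

After this step every batch has the same time dimension, so the fixed-size placeholders no longer reject signals of a different length.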