I call model.fit() several times, each call training one block of layers while the other layers are frozen.
Code
# imports (added; the snippet as posted omitted them — "efn" is assumed to be
# the qubvel `efficientnet` package)
import efficientnet.tfkeras as efn
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# input_tensor, loss_function, features, labels, the train/test index arrays,
# and the hyperparameters (batch_size, top_epoch, verbosity, validation_split)
# are defined earlier in the full script

# create the base pre-trained model
base_model = efn.EfficientNetB0(input_tensor=input_tensor, weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add two fully-connected layers with dropout, sized to the pooled feature width
x = Dense(x.shape[1], activation='relu', name='first_dense')(x)
x = Dropout(0.5)(x)
x = Dense(x.shape[1], activation='relu', name='output')(x)
x = Dropout(0.5)(x)
no_classes = 10
predictions = Dense(no_classes, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# first: train only the top layers (which were randomly initialized),
# i.e. freeze all convolutional layers of the base model
for layer in base_model.layers:
    layer.trainable = False
# FIRST COMPILE
model.compile(optimizer='Adam', loss=loss_function,
              metrics=['accuracy'])
# FIRST FIT
model.fit(features[train], labels[train],
          batch_size=batch_size,
          epochs=top_epoch,
          verbose=verbosity,
          validation_split=validation_split)
# generate generalization metrics
scores = model.evaluate(features[test], labels[test], verbose=1)
print(scores)
# let all layers be trainable for fine-tuning
for layer in model.layers:
    layer.trainable = True
from tensorflow.keras.optimizers import SGD
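The snippet as posted stops at the SGD import; the second compile and fit are not shown. For context, here is a minimal sketch of what that second stage presumably looks like. The SGD settings are assumed placeholders, and only the epoch count (40) is taken from the log below:

# SECOND COMPILE: recompiling is required for the trainable changes to take
# effect; the SGD hyperparameters below are assumed placeholders, not the
# poster's original values
model.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
              loss=loss_function,
              metrics=['accuracy'])
# SECOND FIT: fine-tune the whole network (epochs=40 matches the log below)
model.fit(features[train], labels[train],
          batch_size=batch_size,
          epochs=40,
          verbose=verbosity,
          validation_split=validation_split)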
Strangely, in the second fit the first-epoch accuracy is much lower than the last-epoch accuracy of the first fit.
Results
Epoch 40/40
6286/6286 [==============================] - 14s 2ms/sample - loss: 0.2370 - accuracy: 0.9211 - val_loss: 1.3579 - val_accuracy: 0.6762
874/874 [==============================] - 2s 2ms/sample - loss: 0.4122 - accuracy: 0.8764
Train on 6286 samples, validate on 1572 samples
Epoch 1/40
6286/6286 [==============================] - 60s 9ms/sample - loss: 5.9343 - accuracy: 0.5655 - val_loss: 2.4981 - val_accuracy: 0.5115
I suspect that the second fit is not starting from the weights learned in the first fit.
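One way to check this is to snapshot a layer's weights after the first fit and compare them after recompiling: compile() does not reinitialize weights, so they should be identical. A minimal sketch using the standard Keras API (the layer name 'first_dense' comes from the model definition above):

import numpy as np

# snapshot one trained layer's weights right after the first fit
w_before = model.get_layer('first_dense').get_weights()

# ... unfreeze the layers and recompile with SGD here ...

# compile() keeps the learned weights, so this should print True;
# if it prints False, the weights really were reset between the two fits
w_after = model.get_layer('first_dense').get_weights()
print(all(np.array_equal(a, b) for a, b in zip(w_before, w_after)))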