Early stopping with multiple conditions

I am doing multi-class classification for a recommender system (item recommendation), and I'm currently training my network with a sparse_categorical_crossentropy loss. It therefore makes sense to perform EarlyStopping by monitoring my validation loss, val_loss, like so:

tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

This works as expected. However, the performance of the network (the recommender system) is measured by Average-Precision-at-10, which is tracked as a metric during training as average_precision_at_k10. I could therefore also perform early stopping on this metric:

tf.keras.callbacks.EarlyStopping(monitor='average_precision_at_k10', patience=10)

This also works as expected.

My problem: sometimes the validation loss increases while the Average-Precision-at-10 improves, and vice versa. I therefore need to monitor both and stop early if and only if both deteriorate. What I would like to do:

tf.keras.callbacks.EarlyStopping(monitor=['val_loss', 'average_precision_at_k10'], patience=10)

This obviously doesn't work. Any ideas how this could be done?


慕妹3146593
4 Answers

守着星空守着你

I managed to create my own custom EarlyStopping callback and thought I'd post it here in case anyone else wants to implement something similar. It stops training early if the validation loss and the Average-Precision-at-10 have not improved for patience epochs.

import numpy as np
from tensorflow import keras

class CustomEarlyStopping(keras.callbacks.Callback):
    def __init__(self, patience=0):
        super(CustomEarlyStopping, self).__init__()
        self.patience = patience
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epochs waited while the monitored values have not improved.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best values: loss at infinity, precision at zero.
        self.best_v_loss = np.inf
        self.best_map10 = 0

    def on_epoch_end(self, epoch, logs=None):
        v_loss = logs.get('val_loss')
        map10 = logs.get('val_average_precision_at_k10')
        # The wait counter resets only when BOTH the validation loss AND map10
        # improve; otherwise the epoch counts toward 'patience'.
        if np.less(v_loss, self.best_v_loss) and np.greater(map10, self.best_map10):
            self.best_v_loss = v_loss
            self.best_map10 = map10
            self.wait = 0
            # Record the best weights when the current results are the best so far.
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))

It is then used as:

model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=30,
    verbose=0,
    callbacks=[CustomEarlyStopping(patience=10)],
)
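Note that, as written, the wait counter above resets only when both metrics improve in the same epoch, so training can stop even while one of them is still improving. A minimal sketch of a variant that matches the question's "stop if and only if both deteriorate" condition keeps a separate wait counter per metric and stops only when both have stalled for patience epochs (the class name here is illustrative; the log keys are the same as above):

import numpy as np
from tensorflow import keras

class BothStalledEarlyStopping(keras.callbacks.Callback):
    # Stops only when NEITHER val_loss NOR map10 has improved for 'patience' epochs.
    def __init__(self, patience=10):
        super().__init__()
        self.patience = patience

    def on_train_begin(self, logs=None):
        self.wait_loss = 0    # epochs since val_loss last improved
        self.wait_map10 = 0   # epochs since map10 last improved
        self.best_v_loss = np.inf
        self.best_map10 = -np.inf
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        v_loss = logs.get('val_loss')
        map10 = logs.get('val_average_precision_at_k10')
        # Track each metric's patience independently.
        if np.less(v_loss, self.best_v_loss):
            self.best_v_loss = v_loss
            self.wait_loss = 0
        else:
            self.wait_loss += 1
        if np.greater(map10, self.best_map10):
            self.best_map10 = map10
            self.wait_map10 = 0
        else:
            self.wait_map10 += 1
        # Keep a snapshot while at least one metric is still improving.
        if self.wait_loss == 0 or self.wait_map10 == 0:
            self.best_weights = self.model.get_weights()
        # Stop only when BOTH metrics have stalled for 'patience' epochs.
        if self.wait_loss >= self.patience and self.wait_map10 >= self.patience:
            self.model.stop_training = True
            if self.best_weights is not None:
                self.model.set_weights(self.best_weights)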

呼如林

You can achieve this by creating a custom callback. Below is some code illustrating the kinds of things you can do inside one; the documentation I referenced shows many other options.

import tensorflow as tf
from tensorflow import keras

class LRA(keras.callbacks.Callback):  # subclass the callback class
    # Create class variables as below. These can be accessed in your code outside
    # the class definition as LRA.my_class_variable, LRA.best_weights.
    my_class_variable = None            # a class variable (placeholder value)
    best_weights = model.get_weights()  # another class variable (assumes 'model' already exists)

    # Define an initialization function with the parameters you want to feed to the callback.
    def __init__(self, param1, param2):
        super(LRA, self).__init__()
        self.param1 = param1
        self.param2 = param2
        # ... and so on for all parameters; write any initialization code you need here.

    def on_epoch_end(self, epoch, logs=None):  # method runs at the end of each epoch
        v_loss = logs.get('val_loss')  # example of getting log data: this epoch's validation loss
        acc = logs.get('accuracy')     # another example of getting log data
        LRA.best_weights = self.model.get_weights()  # example of setting a class variable
        print(f'Hello epoch {epoch} has just ended')  # print a message at the end of every epoch
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))  # get the current learning rate
        if v_loss > self.param1:
            new_lr = lr * self.param2
            tf.keras.backend.set_value(self.model.optimizer.lr, new_lr)  # set the optimizer's learning rate
        # write whatever code you need
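A hedged usage sketch of the template above, with param1 read as a val_loss threshold and param2 as a learning-rate multiplier; the toy data and model here are illustrative only:

import numpy as np
import tensorflow as tf

# Toy data so the example runs end to end (illustrative only).
x_train = np.random.rand(256, 4).astype('float32')
y_train = np.random.randint(0, 3, size=(256,))

# Build and compile 'model' before defining LRA, since the class body
# reads model.get_weights() at definition time.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          validation_split=0.2,
          epochs=5,
          # param1 = val_loss threshold, param2 = LR multiplier (illustrative values)
          callbacks=[LRA(param1=0.5, param2=0.5)])

print(LRA.best_weights is not None)  # class variables are reachable outside the class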

哈士奇WWW

I suggest you create your own callback. Below is a solution that monitors both accuracy and loss; you can replace acc with your own metric:

from tensorflow import keras

class CustomCallback(keras.callbacks.Callback):
    acc = {}
    loss = {}
    best_weights = None

    def __init__(self, patience=None):
        super(CustomCallback, self).__init__()
        self.patience = patience

    def on_epoch_end(self, epoch, logs=None):
        epoch += 1
        self.loss[epoch] = logs['loss']
        self.acc[epoch] = logs['accuracy']

        if self.patience and epoch > self.patience:
            # Best weights if the current loss is lower than the loss 'patience'
            # epochs ago; similarly for acc, but when larger.
            if self.loss[epoch] < self.loss[epoch - self.patience] and self.acc[epoch] > self.acc[epoch - self.patience]:
                self.best_weights = self.model.get_weights()
            else:
                # Stop training
                self.model.stop_training = True
                # Load the best weights
                self.model.set_weights(self.best_weights)
        else:
            # The best weights are the current weights
            self.best_weights = self.model.get_weights()

Keep in mind that if you want to control the minimum change in the monitored quantities (aka min_delta), you have to integrate it into the code yourself.
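For instance, a minimal sketch of folding a min_delta into the comparison, written as a subclass of the callback above (the class name and default value are mine, mirroring the built-in EarlyStopping argument):

class CustomCallbackWithMinDelta(CustomCallback):
    # Variant that only counts an improvement when each metric moves by at least min_delta.
    def __init__(self, patience=None, min_delta=0.0):
        super().__init__(patience=patience)
        self.min_delta = min_delta

    def on_epoch_end(self, epoch, logs=None):
        epoch += 1
        self.loss[epoch] = logs['loss']
        self.acc[epoch] = logs['accuracy']

        if self.patience and epoch > self.patience:
            # An epoch only counts as an improvement if the loss drops AND the
            # accuracy rises by more than min_delta relative to 'patience' epochs ago.
            loss_improved = self.loss[epoch] < self.loss[epoch - self.patience] - self.min_delta
            acc_improved = self.acc[epoch] > self.acc[epoch - self.patience] + self.min_delta
            if loss_improved and acc_improved:
                self.best_weights = self.model.get_weights()
            else:
                self.model.stop_training = True
                self.model.set_weights(self.best_weights)
        else:
            self.best_weights = self.model.get_weights()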

芜湖不芜

At this point it is simpler to create a custom loop and just use an if statement. For example:

def main(epochs=50):
    for epoch in range(epochs):
        fit(epoch)
        if test_acc.result() > .8 and topk_acc.result() > .9:
            print(f'\nEarly stopping. Test acc is above 80% and TopK acc is above 90%.')
            break

if __name__ == '__main__':
    main(epochs=100)

Here is a simple custom training loop using this method:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow_datasets as tfds
import tensorflow as tf

data, info = tfds.load('iris', split='train',
                       as_supervised=True,
                       shuffle_files=True,
                       with_info=True)

def preprocessing(inputs, targets):
    scaled = tf.divide(inputs, tf.reduce_max(inputs, axis=0))
    return scaled, targets

dataset = data.filter(lambda x, y: tf.less_equal(y, 2)).\
    map(preprocessing).\
    shuffle(info.splits['train'].num_examples)

train_dataset = dataset.take(120).batch(4)
test_dataset = dataset.skip(120).take(30).batch(4)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax')
    ])

# The model ends in a softmax, so the loss receives probabilities, not logits.
loss_object = tf.losses.SparseCategoricalCrossentropy(from_logits=False)

train_loss = tf.metrics.Mean()
test_loss = tf.metrics.Mean()
train_acc = tf.metrics.SparseCategoricalAccuracy()
test_acc = tf.metrics.SparseCategoricalAccuracy()
topk_acc = tf.metrics.SparseTopKCategoricalAccuracy(k=2)
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs)
        loss = loss_object(labels, logits)
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, logits)

@tf.function
def test_step(inputs, labels):
    logits = model(inputs)
    loss = loss_object(labels, logits)
    test_loss.update_state(loss)
    test_acc.update_state(labels, logits)
    topk_acc.update_state(labels, logits)

def fit(epoch):
    template = 'Epoch {:>2} Train Loss {:.3f} Test Loss {:.3f} ' \
               'Train Acc {:.2f} Test Acc {:.2f} Test TopK Acc {:.2f} '
    train_loss.reset_states()
    test_loss.reset_states()
    train_acc.reset_states()
    test_acc.reset_states()
    topk_acc.reset_states()
    for X_train, y_train in train_dataset:
        train_step(X_train, y_train)
    for X_test, y_test in test_dataset:
        test_step(X_test, y_test)
    print(template.format(
        epoch + 1,
        train_loss.result(),
        test_loss.result(),
        train_acc.result(),
        test_acc.result(),
        topk_acc.result()
    ))

def main(epochs=50):
    for epoch in range(epochs):
        fit(epoch)
        if test_acc.result() > .8 and topk_acc.result() > .9:
            print(f'\nEarly stopping. Test acc is above 80% and TopK acc is above 90%.')
            break

if __name__ == '__main__':
    main(epochs=100)
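The threshold test above stops at fixed target values rather than on deterioration. A hedged sketch of swapping a patience-style check into the same loop, reusing fit, test_loss and topk_acc from the script above (the bookkeeping variables are illustrative):

def main(epochs=50, patience=10):
    # Stop only when NEITHER the test loss nor the TopK accuracy has
    # improved for 'patience' consecutive epochs.
    best_loss, best_topk, wait = float('inf'), 0.0, 0
    for epoch in range(epochs):
        fit(epoch)
        if test_loss.result() < best_loss or topk_acc.result() > best_topk:
            best_loss = min(best_loss, float(test_loss.result()))
            best_topk = max(best_topk, float(topk_acc.result()))
            wait = 0
        else:
            wait += 1
        if wait >= patience:
            print(f'\nEarly stopping: both metrics stalled for {patience} epochs.')
            break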
