How do I convert a model running in eager execution into a static graph and save it to a .pb file?

Imagine I have a model (tf.keras.Model):


import tensorflow as tf
from tensorflow.keras import layers


class ContextExtractor(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.model = self.__get_model()

    def call(self, x, training=False, **kwargs):
        features = self.model(x, training=training)
        return features

    def __get_model(self):
        return self.__get_small_conv()

    def __get_small_conv(self):
        # Small convolutional feature extractor: five strided conv blocks
        # followed by global average pooling.
        model = tf.keras.Sequential()
        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(256, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.GlobalAveragePooling2D())

        return model

I train it and save it like this:


checkpoint = tf.train.Checkpoint(
    model=self.model,
    global_step=tf.train.get_or_create_global_step())
checkpoint.save(weights_path / f'epoch_{epoch}')

So I end up with two saved files: epoch_10-2.index and epoch_10-2.data-00000-of-00001.
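For context, a minimal sketch of how those two files map back to a checkpoint prefix (assuming the same weights_path as above and that the ContextExtractor instance is what was saved under model=; the prefix returned by tf.train.latest_checkpoint is what restore expects):

import tensorflow as tf

model = ContextExtractor()  # the class defined above
checkpoint = tf.train.Checkpoint(
    model=model,
    global_step=tf.train.get_or_create_global_step())

# epoch_10-2.index and epoch_10-2.data-00000-of-00001 both belong to the single
# prefix "epoch_10-2"; latest_checkpoint returns that prefix from the directory.
latest = tf.train.latest_checkpoint(str(weights_path))
status = checkpoint.restore(latest)  # variables are matched lazily as the model is built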


Now I want to deploy the model, so I need a .pb file. How can I get one? I suppose I need to open the model in graph mode, load my weights and then save everything to a .pb file, but how exactly is that done?


ITMISS
2 Answers

holdtom

For everyone looking for an answer to my own question, see below.

Note: I assume you have already saved your model into checkpoint_dir and want to get this model back in graph mode so that you can save it as a .pb file. (freeze_session is the helper defined in the other answer below.)

model = ContextExtractor()
predictions = model(images, training=False)  # `images` is the input tensor/placeholder for your images

checkpoint = tf.train.Checkpoint(model=model, global_step=tf.train.get_or_create_global_step())
status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
status.assert_consumed()

with tf.Session() as sess:
    status.initialize_or_restore(sess)  # this is the main line for loading

    # Actually, I don't know whether it is necessary to pass one batch through to create the graph or not
    img_batch = get_image(...)
    ans = sess.run(predictions, feed_dict={images: img_batch})

    frozen_graph = freeze_session(sess, output_names=[out.op.name for out in model.outputs])

# save your model
tf.train.write_graph(frozen_graph, "where/to/save", "tf_model.pb", as_text=False)
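As a follow-up, here is a hedged sketch of how the resulting tf_model.pb could be loaded back for inference (TF 1.x). The tensor names "input:0" and "output:0" and the batch shape are placeholders of mine, not names taken from the answer above; inspect the graph (for example by printing the node names) to find the real ones.

import numpy as np
import tensorflow as tf

# Read the frozen GraphDef written by tf.train.write_graph above.
with tf.gfile.GFile("where/to/save/tf_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")
    # print([n.name for n in graph_def.node])  # discover the real tensor names

with tf.Session(graph=graph) as sess:
    x = graph.get_tensor_by_name("input:0")    # assumption: replace with the real input name
    y = graph.get_tensor_by_name("output:0")   # assumption: replace with the real output name
    img_batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy batch; shape is an assumption
    features = sess.run(y, feed_dict={x: img_batch})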

蝴蝶不菲

You should get the session:

tf.keras.backend.get_session()

and then freeze the model, for example as done here: https://www.dlology.com/blog/how-to-convert-trained-keras-model-to-tensorflow-and-make-prediction/

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])

Then save the model as a .pb (also shown in the link):

tf.train.write_graph(frozen_graph, "model", "tf_model.pb", as_text=False)

If this is too much trouble, try saving the Keras model as a .h5 (HDF5 file) and then follow the instructions in the linked post; a rough sketch of that route is given after this answer.

From the TensorFlow documentation: write compatible code. The same code written for eager execution will also build a graph during graph execution. Do so simply by running the same code in a new Python session where eager execution is not enabled.

Also from the same page: to save and load models, tf.train.Checkpoint stores the internal state of objects without requiring hidden variables. To record the state of a model, an optimizer and a global step, pass them to tf.train.Checkpoint:

checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
                           model=model,
                           optimizer_step=tf.train.get_or_create_global_step())

root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))

I recommend the last section of this page: https://www.tensorflow.org/guide/eager

Hope this helps.
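The promised sketch of the .h5 route, under the assumption that the weights were exported in eager mode with model.save_weights("weights.h5") and that the model is rebuilt with the same layer structure in a fresh, non-eager Python session; the input shape (None, 224, 224, 3) and file names are placeholders, and freeze_session is the helper defined in this answer. Whether HDF5 weight files work for a subclassed model depends on the TF version; if they do not, the TF-format checkpoint route from the first answer works the same way.

import tensorflow as tf
from tensorflow.keras import backend as K

# New Python session, eager execution NOT enabled, so everything below builds a graph.
model = ContextExtractor()                                 # same class as in the question
images = tf.placeholder(tf.float32, (None, 224, 224, 3))   # assumed input shape
features = model(images, training=False)                   # creates the variables in the graph

model.load_weights("weights.h5")                           # weights saved earlier in eager mode

frozen_graph = freeze_session(K.get_session(),
                              output_names=[features.op.name])
tf.train.write_graph(frozen_graph, "model", "tf_model.pb", as_text=False)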