I am trying to convert a frozen SSD MobileNet v2 model to TFLite format for use on Android. Here are all my steps:
1. I retrained the ssd_mobilenet_v2_coco_2018_03_29 model from the model zoo using the TF Object Detection API's train.py. (OK)
2. I exported the trained model.ckpt to a frozen model file using export_inference_graph.py, also provided by the TF Object Detection API. (OK)
3. I tested the frozen graph in Python, both on the GPU and restricted to CPU only. It works (see the sketch after these steps). (OK)
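For reference, the step-3 check looked roughly like this (a minimal sketch, not my exact script; the frozen_inference_graph.pb file name and the dummy 832x832 input are assumptions, and the tensor names match the SavedModel signature shown in the log further down):

import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'  # assumed export path

# Load the frozen GraphDef and import it into a fresh graph
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# Run one dummy image through the detector
with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 832, 832, 3), dtype=np.uint8)
    boxes, scores, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': image})
    print(boxes.shape, scores.shape, num)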
The bad part: I then tried to convert the model with the following code:
import tensorflow as tf
tf.enable_eager_execution()
saved_model_dir = 'inference_graph/saved_model/'
# input/output names taken from the model's signature (see the log below)
input_arrays = ['image_tensor']
output_arrays = ['detection_boxes', 'detection_scores', 'detection_classes', 'num_detections']
converter = tf.contrib.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, input_arrays=input_arrays, output_arrays=output_arrays,
    input_shapes={"image_tensor": [1, 832, 832, 3]})
converter.post_training_quantize = True
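For context, the rest of the conversion is just the standard TFLiteConverter convert-and-write step (sketched here; the detect.tflite file name is a placeholder, not from my original script):

tflite_model = converter.convert()  # this is where the actual conversion runs
with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)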
At first I tried without passing the input shape argument to the function, but that did not work. Since then I have read that it does not matter what you put there.
The output up to this point:
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:The specified SavedModel has no variables; no checkpoints were restored.
INFO:tensorflow:The given SavedModel MetaGraphDef contains SignatureDefs with the following keys: {'serving_default'}
INFO:tensorflow:input tensors info:
INFO:tensorflow:Tensor's key in saved_model's tensor_map: inputs
INFO:tensorflow: tensor name: image_tensor:0, shape: (-1, -1, -1, 3), type: DT_UINT8
INFO:tensorflow:output tensors info:
INFO:tensorflow:Tensor's key in saved_model's tensor_map: num_detections
INFO:tensorflow: tensor name: num_detections:0, shape: (-1), type: DT_FLOAT
INFO:tensorflow:Tensor's key in saved_model's tensor_map: detection_boxes
INFO:tensorflow: tensor name: detection_boxes:0, shape: (-1, 100, 4), type: DT_FLOAT
INFO:tensorflow:Tensor's key in saved_model's tensor_map: detection_scores
I cannot paste the full console output here because it exceeds the 30,000-character limit, but you can see it here: https://pastebin.com/UyT2x2Vk
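In case it helps, the signature information in that log can also be printed directly from the SavedModel; a minimal sketch (run as a separate script without eager execution enabled):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the serving MetaGraph and print its serving_default signature
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'inference_graph/saved_model/')
    sig = meta_graph.signature_def['serving_default']
    print('inputs :', {k: v.name for k, v in sig.inputs.items()})
    print('outputs:', {k: v.name for k, v in sig.outputs.items()})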
Please help me at this point: what can I do to make this work? :(
My setup: Ubuntu 16.04, TensorFlow-GPU 1.12