
Developer Guide


Using a TensorFlow Lite model in your mobile app requires multiple considerations: you must choose a pre-trained or custom model, convert the model to the TensorFlow Lite format, and finally, integrate the model in your app.

1. Choose a model

Depending on the use case, you can choose one of the popular open-sourced models, such as InceptionV3 or MobileNets, and re-train these models with a custom data set or even build your own custom model.

Use a pre-trained model

MobileNets is a family of mobile-first computer vision models for TensorFlow designed to effectively maximize accuracy while taking into consideration the restricted resources of on-device or embedded applications. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be used for classification, detection, embeddings, and segmentation, similar to other popular large-scale models such as Inception. Google provides 16 pre-trained ImageNet classification checkpoints for MobileNets that can be used in mobile projects of all sizes.

Inception-v3 is an image recognition model that achieves fairly high accuracy recognizing general objects with 1000 classes, for example, "Zebra", "Dalmatian", and "Dishwasher". The model extracts general features from input images using a convolutional neural network and classifies them based on those features with fully-connected and softmax layers.

On Device Smart Reply is an on-device model that provides one-touch replies for incoming text messages by suggesting contextually relevant messages. The model is built specifically for memory constrained devices, such as watches and phones, and has been successfully used in Smart Replies on Android Wear. Currently, this model is Android-specific.

These pre-trained models are available for download.

Re-train Inception-V3 or MobileNet for a custom data set

These pre-trained models were trained on the ImageNet data set, which contains 1000 predefined classes. If these classes are not sufficient for your use case, the model will need to be re-trained. This technique is called transfer learning: it starts with a model that has already been trained on one problem and retrains it on a similar problem. Training deep networks from scratch can take days, but transfer learning is fairly quick. In order to do this, you need to generate a custom data set labeled with the relevant classes.

The TensorFlow for Poets codelab walks through the re-training process step-by-step. The code supports both floating point and quantized inference.
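For orientation, the sketch below shows the idea behind such retraining: a new, trainable softmax head is attached to the bottleneck features of a frozen MobileNet while the pre-trained layers stay fixed. The graph path, tensor names, bottleneck width, and class count are illustrative assumptions, not values prescribed by the codelab.

import tensorflow as tf

# Assumed paths and tensor names, for illustration only.
GRAPH_PB = "/tmp/mobilenet_v1_1.0_224/frozen_graph.pb"
BOTTLENECK_TENSOR = "MobilenetV1/Logits/AvgPool_1a/AvgPool:0"
NUM_CLASSES = 5  # e.g. five flower categories from a custom data set

# Load the frozen, pre-trained graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB, "rb") as f:
  graph_def.ParseFromString(f.read())

# Re-map the graph's input to a new placeholder and pull out the
# bottleneck features; the pre-trained weights stay fixed.
images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="images")
bottleneck, = tf.import_graph_def(
    graph_def, input_map={"input:0": images},
    return_elements=[BOTTLENECK_TENSOR])
# 1024 is the bottleneck width assumed for MobileNet v1 1.0.
bottleneck = tf.stop_gradient(tf.reshape(bottleneck, [-1, 1024]))

# New classification head trained on the custom classes.
labels = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(bottleneck, NUM_CLASSES)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# train_op would be run over batches of labeled custom images.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)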

Train a custom model

A developer may also train a custom model with TensorFlow. If you have already written and trained a model, the first step toward using it with TensorFlow Lite is to export it as a tf.GraphDef file, since the conversion tools described below operate on the serialized graph rather than on your Python code.

TensorFlow Lite currently supports a subset of TensorFlow operators. Refer to the TensorFlow Lite & TensorFlow Compatibility Guide for supported operators and their usage. This set of operators will continue to grow in future TensorFlow Lite releases.

2. Convert the model format

The model generated (or downloaded) in the previous step is a standard TensorFlow model, and you should now have a tf.GraphDef (.pb) file. The formats involved in converting it to a TensorFlow Lite model are:

  • tf.GraphDef (.pb) —A serialized TensorFlow graph; a protobuf containing the operator, tensor, and variable definitions.
  • CheckPoint (.ckpt) —Serialized variables from a TensorFlow graph. Since this does not contain a graph structure, it cannot be interpreted by itself.
  • FrozenGraphDef —A subclass of GraphDef that does not contain variables. A GraphDef can be converted to a FrozenGraphDef by taking a CheckPoint and a GraphDef, and converting each variable into a constant using the value retrieved from the CheckPoint.
  • SavedModel —A GraphDef and CheckPoint with a signature that labels input and output arguments to a model. A GraphDef and CheckPoint can be extracted from a SavedModel.
  • TensorFlow Lite model (.tflite) —A serialized FlatBuffer that contains TensorFlow Lite operators and tensors for the TensorFlow Lite interpreter, similar to a FrozenGraphDef.
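To make these artifacts concrete, the following sketch writes out both a GraphDef and a CheckPoint for a toy graph; the paths and names are illustrative only.

import tensorflow as tf

# Toy graph: one placeholder, one variable, one named output.
x = tf.placeholder(tf.float32, shape=(1, 4), name="input")
w = tf.Variable(tf.ones([4, 2]), name="weights")
y = tf.identity(tf.matmul(x, w), name="output")

saver = tf.train.Saver()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # GraphDef (.pb): the graph structure only, no variable values.
  tf.train.write_graph(sess.graph_def, "/tmp/toy", "toy_graph.pb", as_text=False)
  # CheckPoint (.ckpt): the variable values only, no structure.
  saver.save(sess, "/tmp/toy/toy.ckpt")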

Freeze Graph

To use the GraphDef .pb file with TensorFlow Lite, you must have checkpoints that contain trained weight parameters. The .pb file only contains the structure of the graph. The process of merging the checkpoint values with the graph structure is called freezing the graph.

You should have a checkpoints folder from training, or you can download the checkpoints for a pre-trained model (for example, MobileNets).

To freeze the graph, use the following command (changing the arguments):

freeze_graph --input_graph=/tmp/mobilenet_v1_224.pb \
  --input_checkpoint=/tmp/checkpoints/mobilenet-10202.ckpt \
  --input_binary=true \
  --output_graph=/tmp/frozen_mobilenet_v1_224.pb \
  --output_node_names=MobilenetV1/Predictions/Reshape_1

The input_binary flag must be enabled so the protobuf is read and written in a binary format. Set the input_graph and input_checkpoint files.

The output_node_names may not be obvious outside of the code that built the model. The easiest way to find them is to visualize the graph, either with TensorBoard or graphviz.
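The same freezing step can also be sketched in Python with tf.graph_util.convert_variables_to_constants; the snippet below reuses the example paths above and assumes a .meta file was written next to the checkpoint.

import tensorflow as tf

CKPT = "/tmp/checkpoints/mobilenet-10202.ckpt"

# Rebuild the graph from the checkpoint's .meta file and restore weights.
saver = tf.train.import_meta_graph(CKPT + ".meta")
with tf.Session() as sess:
  saver.restore(sess, CKPT)

  # Listing node names helps locate output_node_names (cf. TensorBoard).
  for node in sess.graph_def.node[-5:]:
    print(node.name)

  # Fold the restored variables into constants, producing a frozen GraphDef.
  frozen = tf.graph_util.convert_variables_to_constants(
      sess, sess.graph_def, ["MobilenetV1/Predictions/Reshape_1"])
  tf.train.write_graph(frozen, "/tmp", "frozen_mobilenet_v1_224.pb",
                       as_text=False)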

The frozen GraphDef is now ready for conversion to the FlatBuffer format (.tflite) for use on Android or iOS devices. For Android, the TensorFlow Optimizing Converter (toco) tool supports both float and quantized models. To convert the frozen GraphDef to the .tflite format:

toco --input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=/tmp/mobilenet_v1_1.0_224.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --input_shapes=1,224,224,3

For details on running quantized models on device, see Fixed Point Quantization.

It is also possible to use the TensorFlow Optimizing Converter with protobufs from either Python or from the command line (see the toco_from_protos.py example). This allows you to integrate the conversion step into the model design workflow, ensuring the model is easily convertible to a mobile inference graph. For example:

import tensorflow as tf

# Build a trivial graph: a float image placeholder plus two constant offsets.
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
out = tf.identity(val, name="out")

with tf.Session() as sess:
  # Convert the GraphDef directly to a TensorFlow Lite FlatBuffer,
  # naming the input and output tensors.
  tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
  open("converted_model.tflite", "wb").write(tflite_model)

For usage, see the TensorFlow Optimizing Converter command-line examples.

Refer to the Ops compatibility guide for troubleshooting help, and if that doesn't help, please file an issue.

The development repo contains a tool to visualize TensorFlow Lite models after conversion. To build and run the visualize.py tool:

bazel run tensorflow/contrib/lite/tools:visualize -- model.tflite model_viz.html

This generates an interactive HTML page listing subgraphs, operations, and a graph visualization.

3. Use the TensorFlow Lite model for inference in a mobile app

After completing the prior steps, you should now have a .tflite model file.
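Before integrating the model into an app, it can help to sanity-check the .tflite file on a desktop machine. The sketch below assumes a TensorFlow build whose Python interpreter wrapper lives under tf.contrib.lite (newer builds expose it as tf.lite.Interpreter) and reuses the example model path from above.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.contrib.lite.Interpreter(
    model_path="/tmp/mobilenet_v1_1.0_224.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run a dummy all-zero image through the model to confirm it executes.
dummy = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.shape)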

Android

Since Android apps are written in Java and the core TensorFlow library is in C++, a JNI library is provided as an interface. This is only meant for inference—it provides the ability to load a graph, set up inputs, and run the model to calculate outputs.

Android demo app

Build TensorFlow on Android

iOS

iOS demo app

Core ML support

Core ML is a machine learning framework used in Apple products. In addition to using TensorFlow Lite models directly in your applications, you can convert trained TensorFlow models to the Core ML format for use on Apple devices. To use the converter, refer to the TensorFlow-CoreML converter documentation.
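As an illustration, a conversion with the tfcoreml Python package might look like the following sketch; the paths and tensor names are assumptions carried over from the earlier MobileNet example.

import tfcoreml

# Convert a frozen TensorFlow graph to a Core ML model.
# Paths and tensor names are illustrative only.
tfcoreml.convert(
    tf_model_path="/tmp/frozen_mobilenet_v1_224.pb",
    mlmodel_path="/tmp/mobilenet_v1_224.mlmodel",
    output_feature_names=["MobilenetV1/Predictions/Reshape_1:0"],
    input_name_shape_dict={"input:0": [1, 224, 224, 3]})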

Raspberry Pi

Compile TensorFlow Lite for a Raspberry Pi by following the RPi build instructions. This compiles a static library file (.a) used to build your app. There are plans for Python bindings and a demo app.
