0 Intro
TF 2.0 offers eager execution by default, intuitive higher-level APIs, and flexible model building on any platform; graph-mode performance is recovered by decorating Python functions with @tf.function.
See the GPU guide for CUDA®-enabled cards.
1 Effective TensorFlow 2.0
1.1 Summary of major changes
- API cleanup
Removed tf.app, tf.flags, and tf.logging in favor of absl-py and similar packages.
- Eager execution is the default; decorate functions with @tf.function when graph execution is needed.
- No more globals: if you lose track of a tf.Variable, it gets garbage collected.
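A minimal sketch of what this means in practice:
import tensorflow as tf

v = tf.Variable(1.0)  # tracked only through this Python reference
del v                 # no references remain, so the variable is garbage collected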
- Functions, not sessions
Compare:
# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
AutoGraph automatically converts Python control flow into the equivalent graph operations:
* `for`/`while` -> [`tf.while_loop`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/while_loop) (`break` and `continue` are supported)
* `if` -> [`tf.cond`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/cond)
* `for _ in dataset` -> `dataset.reduce`
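A minimal sketch of this conversion (sum_even is a made-up example function):

import tensorflow as tf

@tf.function
def sum_even(n):
    # AutoGraph rewrites the while/if below into tf.while_loop / tf.cond
    total = tf.constant(0)
    i = tf.constant(0)
    while i < n:
        if i % 2 == 0:
            total += i
        i += 1
    return total

print(sum_even(tf.constant(10)))  # tf.Tensor(20, shape=(), dtype=int32)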
1.2 Recommendations for TensorFlow 2.0
- Refactor your code into smaller functions
- Use Keras layers and models to manage variables, e.g.:
# Each layer can be called, with a signature equivalent to linear(x)
layers = [tf.keras.layers.Dense(hidden_size, activation=tf.nn.sigmoid) for _ in range(n)]
perceptron = tf.keras.Sequential(layers)
# layers[3].trainable_variables => returns [w3, b3]
# perceptron.trainable_variables => returns [w0, b0, ...]
Keras layers/models inherit from tf.train.Checkpointable and are integrated with @tf.function, which makes it possible to directly checkpoint or export SavedModels from Keras objects. You do not necessarily have to use Keras's .fit() API to take advantage of these integrations.
In short, Keras layers/models can be checkpointed or exported as SavedModels directly.
trunk = tf.keras.Sequential([...])
head1 = tf.keras.Sequential([...])
head2 = tf.keras.Sequential([...])
path1 = tf.keras.Sequential([trunk, head1])
path2 = tf.keras.Sequential([trunk, head2])
# Train on primary dataset
for x, y in main_dataset:
    with tf.GradientTape() as tape:
        prediction = path1(x)
        loss = loss_fn_head1(prediction, y)
    # Simultaneously optimize trunk and head1 weights.
    gradients = tape.gradient(loss, path1.trainable_variables)
    optimizer.apply_gradients(zip(gradients, path1.trainable_variables))
Training uses tape.gradient() to compute gradients and optimizer.apply_gradients() to apply them.
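Once trained, the shared trunk can be saved and reloaded directly (a sketch; the path '/tmp/trunk' is arbitrary):

tf.saved_model.save(trunk, '/tmp/trunk')
restored_trunk = tf.saved_model.load('/tmp/trunk')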
- Combine tf.data.Datasets and @tf.function
Datasets are iterables (not iterators), and work just like other Python iterables in eager mode. You can fully utilize dataset async prefetching/streaming features by wrapping your code in tf.function(), which replaces Python iteration with the equivalent graph operations using AutoGraph.
In short, tf.data.Dataset can stream data, and the Keras .fit() API consumes Dataset objects directly; see the sketch below.
- Take advantage of AutoGraph with Python control flow
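A minimal sketch combining the two points above; model, optimizer, and loss_fn are assumed to be defined elsewhere:

@tf.function
def train(model, dataset, optimizer, loss_fn):
    for x, y in dataset:  # AutoGraph replaces Python iteration with graph ops
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))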
- tf.metrics aggregates data and tf.summary logs them
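For example (the log directory '/tmp/summaries' and the random stand-in loss are placeholders):

loss_metric = tf.keras.metrics.Mean(name='train_loss')
writer = tf.summary.create_file_writer('/tmp/summaries')

with writer.as_default():
    for step in range(300):
        loss = tf.random.uniform([])    # stand-in for a real training loss
        loss_metric.update_state(loss)  # tf.metrics aggregates values
        if step % 100 == 0:
            # tf.summary logs the aggregate, then the metric is reset
            tf.summary.scalar('loss', loss_metric.result(), step=step)
            loss_metric.reset_states()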
- Use tf.config.experimental_run_functions_eagerly() when debugging
@tf.function
def f(x):
    if x > 0:
        import pdb
        pdb.set_trace()
        x = x + 1
    return x

tf.config.experimental_run_functions_eagerly(True)
f(tf.constant(1))
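Remember to restore graph execution afterwards with tf.config.experimental_run_functions_eagerly(False).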
2 Migrate from TF 1 to TF 2 and convert with the upgrade script
https://www.tensorflow.org/beta/guide/migration_guide
https://www.tensorflow.org/beta/guide/upgrade
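As a quick example of the upgrade script (file names here are placeholders):
tf_upgrade_v2 --infile old_model.py --outfile old_model_v2.py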
3 Get started for beginners
from __future__ import absolute_import, division, print_function, unicode_literals
# Install TensorFlow
!pip install -q tensorflow==2.0.0-beta1
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
Key pattern: define the model; model.compile(); model.fit(); model.evaluate().
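To get predictions from the trained model (a usage sketch):

import numpy as np
probs = model.predict(x_test[:1])  # softmax probabilities, shape (1, 10)
print(np.argmax(probs, axis=1))    # predicted digit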
4 Get started for experts
!pip install -q tensorflow==2.0.0-beta1
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)

@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)
EPOCHS = 5

for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
Key pattern: build batched input with tf.data.Dataset.from_tensor_slices(); subclass Model to define a custom model; define the loss with tf.keras.losses.SparseCategoricalCrossentropy() and the optimizer with tf.keras.optimizers.Adam(); train with tf.GradientTape.
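Note that as written the metric objects accumulate across epochs; to report true per-epoch numbers, call train_loss.reset_states() (and likewise for the other metrics) at the start of each epoch.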