
A First Taste of TensorFlow

Author: YANWeichuan | Published 2018-08-16 17:00

I ran a complete TensorFlow example: define one simple layer, build a network from it, set the training parameters, and train; after training, the model makes fairly accurate predictions. What is striking is that just a few lines of code are enough to get a reasonably accurate prediction model.

Layer definition: a simple affine transformation, y = Wx + b. Its parameters are the input dimension, the output dimension, the input tensor, and an optional activation function.
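
In code this is just a matrix multiply plus a bias, with the activation applied if one is given; a minimal sketch of the idea (the full listing below defines the same function, using random-normal initialization):

def layer(output_dim, input_dim, inputs, activation = None):
    W = tf.Variable(tf.random_normal([input_dim, output_dim]))   # weight matrix
    b = tf.Variable(tf.random_normal([1, output_dim]))           # bias row vector
    XWb = tf.matmul(inputs, W) + b                               # y = Wx + b
    return XWb if activation is None else activation(XWb)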

Network definition:

  • Input layer: a placeholder with the batch size left unspecified and 784 dimensions, one per pixel of the flattened 28x28 image
  • Hidden layer: one layer with 784 inputs and 256 outputs, fed by the input placeholder x, with a ReLU activation
  • Output layer: 10 outputs, one per digit, fed by the hidden layer output, with no activation (see the wiring sketch after this list)
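
Wiring the three layers together then takes three lines, mirroring the full listing below:

x = tf.placeholder("float", [None, 784])
h1 = layer(output_dim = 256, input_dim = 784, inputs = x, activation = tf.nn.relu)
y_predict = layer(output_dim = 10, input_dim = 256, inputs = h1, activation = None)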

Training parameters:

  • Loss function: softmax cross-entropy between the output logits and the one-hot labels, averaged over the batch with tf.reduce_mean (the softmax is applied inside the loss, which is why the output layer has no activation)
  • Optimizer: Adam, configured with a learning rate, minimizing the loss function
  • Accuracy: the mean of the correct-prediction indicator, i.e. the fraction of examples whose predicted class matches the label (all three pieces are gathered in the snippet after this list)
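
The loss, optimizer, and accuracy side by side, exactly as in the full listing below (y_label is the one-hot label placeholder):

y_label = tf.placeholder("float", [None, 10])
loss_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = y_predict, labels = y_label))
optimizer = tf.train.AdamOptimizer(learning_rate = 0.001).minimize(loss_function)
correct_predict = tf.equal(tf.argmax(y_label, 1), tf.argmax(y_predict, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, "float"))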

Training:

  • Iterate for a fixed number of epochs
  • In each epoch, feed the training data batch by batch
  • Run the optimizer on each batch
  • Fetch the loss and accuracy (optional; here they are computed on the validation set once per epoch)
  • Evaluate the accuracy on the test set
  • Run the prediction (a condensed version of the loop is sketched after this list)
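
A condensed view of the loop, matching the full listing below:

for epoch in range(train_epochs):
    for i in range(total_batches):
        batch_x, batch_y = mnist.train.next_batch(batch_size)              # one mini-batch
        sess.run(optimizer, feed_dict = {x: batch_x, y_label: batch_y})    # one optimization step
    loss, acc = sess.run([loss_function, accuracy],                        # per-epoch monitoring
            feed_dict = {x: mnist.validation.images, y_label: mnist.validation.labels})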

Improving accuracy:

  • Add a hidden layer and widen the hidden layers from 256 to 1000 units (one possible version of this change is sketched after this list)
  • Accuracy rises from about 0.94 to 0.96
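
A possible version of that change; the post does not spell out the larger architecture, so the second hidden layer and the 1000-unit width below are assumptions:

h1 = layer(output_dim = 1000, input_dim = 784, inputs = x, activation = tf.nn.relu)
h2 = layer(output_dim = 1000, input_dim = 1000, inputs = h1, activation = tf.nn.relu)   # assumed extra hidden layer
y_predict = layer(output_dim = 10, input_dim = 1000, inputs = h2, activation = None)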

The data can be downloaded directly from http://yann.lecun.com/exdb/mnist/ and placed in the directory the script reads from (data/MNIST_data/ here); read_data_sets will also try to download the files automatically if they are missing.
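
For reference, read_data_sets looks for the four standard MNIST archives in that directory and reads them without extraction; the expected layout is:

data/MNIST_data/
    train-images-idx3-ubyte.gz
    train-labels-idx1-ubyte.gz
    t10k-images-idx3-ubyte.gz
    t10k-labels-idx1-ubyte.gz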

import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import matplotlib.pyplot as plt
import numpy as np
from time import time
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

mnist = input_data.read_data_sets("data/MNIST_data/", one_hot = True)
print("     train: ", mnist.train.num_examples)
print("validation: ", mnist.validation.num_examples)
print("      test: ", mnist.test.num_examples)

print("image shape: ", mnist.train.images.shape)
print("label shape: ", mnist.train.labels.shape)

def show_image(image):
    plt.imshow(image.reshape(28, 28), cmap = 'binary')
    plt.show()

def plot_image_label_prediction(images, labels, prediction = [], idx = 0, num = 10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)
    if num > 25:
        num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)
        ax.imshow(np.reshape(images[idx], (28, 28)), cmap="binary")
        title = "label = " + str(np.argmax(labels[idx]))
        if len(prediction) > 0:
            title += ", prediction = " + str(prediction[idx])
        ax.set_title(title, fontsize = 10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()

#show_image(mnist.train.images[0])
print("labels]0]: ", mnist.train.labels[0])
print("labels[0]: ", np.argmax(mnist.train.labels[0]))
#plot_image_label_prediction(mnist.train.images, mnist.train.labels)
#batch_images, batch_labels = mnist.train.next_batch(batch_size = 100)
#plot_image_label_prediction(batch_images, batch_labels)

def layer(output_dim, input_dim, inputs, activation = None):
    W = tf.Variable(tf.random_normal([input_dim, output_dim]))
    b = tf.Variable(tf.random_normal([1, output_dim]))
    XWb = tf.matmul(inputs, W) + b
    if activation is None:
        outputs  = XWb
    else:
        outputs = activation(XWb)

    return outputs

x = tf.placeholder("float", [None, 784])
h1 = layer(output_dim = 256, input_dim = 784, inputs = x, activation = tf.nn.relu)
y_predict = layer(output_dim = 10, input_dim = 256, inputs = h1, activation = None)

y_label = tf.placeholder("float", [None, 10])
loss_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = y_predict, labels = y_label))
optimizer = tf.train.AdamOptimizer(learning_rate = 0.001).minimize(loss_function)
correct_predict = tf.equal(tf.argmax(y_label, 1), tf.argmax(y_predict, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, "float"))

train_epochs = 15
batch_size = 100
total_batches = int(mnist.train.num_examples/batch_size)
loss_list = []
epoch_list = []
accuracy_list = []

start_time = time()

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for epoch in range(train_epochs):
    for i in range(total_batches):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        sess.run(optimizer, feed_dict = {x: batch_x, y_label: batch_y})
    loss, acc = sess.run([loss_function, accuracy],
            feed_dict = {x: mnist.validation.images, y_label: mnist.validation.labels})
    epoch_list.append(epoch)
    loss_list.append(loss)
    accuracy_list.append(acc)
    print("Train Epoch: ", "%2d, " % (epoch + 1),
        "Loss = {:.9f}, ".format(loss),
        "Accuracy = ", acc)
duration = time() - start_time
print("Train finished takes: ", duration)

print("Accuracy: ", sess.run(accuracy, feed_dict={x:mnist.test.images, y_label:mnist.test.labels}))

prediction_result = sess.run(tf.argmax(y_predict, 1), feed_dict={x: mnist.test.images})
print("predict result: ", prediction_result[:10])
plot_image_label_prediction(mnist.test.images, mnist.test.labels, prediction_result, num = 25)

sess.close()

Training results

Train Epoch:   1,  Loss = 6.796162605,  Accuracy =  0.8338
Train Epoch:   2,  Loss = 4.300796986,  Accuracy =  0.8828
Train Epoch:   3,  Loss = 3.252708912,  Accuracy =  0.9038
Train Epoch:   4,  Loss = 2.677492619,  Accuracy =  0.915
Train Epoch:   5,  Loss = 2.369313955,  Accuracy =  0.9196
Train Epoch:   6,  Loss = 2.162565947,  Accuracy =  0.9248
Train Epoch:   7,  Loss = 1.811923862,  Accuracy =  0.9334
Train Epoch:   8,  Loss = 1.715990782,  Accuracy =  0.933
Train Epoch:   9,  Loss = 1.545861244,  Accuracy =  0.9396
Train Epoch:  10,  Loss = 1.475827694,  Accuracy =  0.9406
Train Epoch:  11,  Loss = 1.449908972,  Accuracy =  0.9404
Train Epoch:  12,  Loss = 1.376323223,  Accuracy =  0.9424
Train Epoch:  13,  Loss = 1.375033021,  Accuracy =  0.9402
Train Epoch:  14,  Loss = 1.258133173,  Accuracy =  0.9446
Train Epoch:  15,  Loss = 1.269988537,  Accuracy =  0.9422
Training finished, took:  23.701005935668945
Accuracy:  0.9428

Recognition results: the plot produced by plot_image_label_prediction, showing 25 test images with their labels and predicted digits.
