
TensorFlow Notes: 4.1 Neural Network Optimization, Loss Functions

Author: 九除以三还是三哦 | Published 2019-08-08 15:02
  • Loss functions
    1. The loss function measures the gap between the fitted result and the true result; as the optimization objective, it directly determines how well the model trains.
    2. Three main types: mean squared error (MSE), custom losses, and cross-entropy (a minimal sketch of all three follows this list).
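A minimal TF1-style sketch of what each of the three types looks like as a graph op. The MSE and custom forms reappear verbatim in the script below; the cross-entropy line is an illustrative extra, assuming y already holds probabilities in (0, 1):

import tensorflow as tf

y_=tf.compat.v1.placeholder(tf.float32,shape=(None,1))  #labels
y=tf.compat.v1.placeholder(tf.float32,shape=(None,1))   #predictions (stand-in for the network output)
COST,PROFIT=1.0,9.0  #example penalty weights, matching the commented constants in the script below

#1) mean squared error: average squared gap between label and prediction
loss_mse=tf.reduce_mean(tf.square(y_-y))
#2) custom piecewise loss: charge over- and under-prediction at different rates
loss_custom=tf.reduce_sum(tf.where(tf.greater(y,y_),(y-y_)*COST,(y_-y)*PROFIT))
#3) cross-entropy (for classification): clip y away from 0 so log() stays finite
loss_ce=-tf.reduce_mean(y_*tf.math.log(tf.clip_by_value(y,1e-12,1.0)))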
  • Problem setup

Predict daily yogurt sales y. x1 and x2 are factors that influence daily sales.
Before modeling, the data to collect in advance: daily x1, x2, and actual sales y_ (the known answers; in the best case, production = sales).
We fabricate a dataset X, Y_ with y_ = x1 + x2 plus noise in [-0.05, +0.05], and fit a function that can predict sales.

  • Code: opt4_1.py, opt4_2.py, opt4_3.py
#coding:utf-8
#Environment: ubuntu18.04  python3.6.8  tensorflow1.14.0
#Author: 九除以三还是三哦  If you spot a mistake, please point it out in the comments!
#0 Import modules and generate the simulated dataset.
import tensorflow as tf
import numpy as np
BATCH_SIZE=8
SEED=23455
#COST=1
#PROFIT=9

rdm=np.random.RandomState(SEED)
X=rdm.rand(32,2)
Y_=[[x1+x2+(rdm.rand()/10.0-0.05)]for (x1,x2)in X] #label: y_ = x1 + x2 plus uniform noise in [-0.05, 0.05)

#1 Define the network's inputs, parameters, and output, and the forward propagation.
x=tf.compat.v1.placeholder(tf.float32,shape=(None,2))
y_=tf.compat.v1.placeholder(tf.float32,shape=(None,1))
w1=tf.Variable(tf.random.normal([2,1],stddev=1,seed=1))
y=tf.matmul(x,w1)

#2 Define the loss function and the backpropagation method.
#Loss function: MSE; backpropagation: gradient descent.
loss_mse=tf.reduce_mean(tf.square(y_-y))
#loss_mse=tf.reduce_sum(tf.where(tf.greater(y,y_),(y-y_)*COST,(y_-y)*PROFIT)) #custom loss function (used by opt4_2.py and opt4_3.py)
train_step=tf.compat.v1.train.GradientDescentOptimizer(0.001).minimize(loss_mse)

#3 Create a session and train for STEPS rounds.
with tf.compat.v1.Session() as sess:
    init_op=tf.compat.v1.global_variables_initializer()
    sess.run(init_op)
    STEPS=20000
    for i in range(STEPS):
        start=(i*BATCH_SIZE)%32  #cycle through the 32 samples in batches of BATCH_SIZE
        end=start+BATCH_SIZE
        sess.run(train_step,feed_dict={x:X[start:end],y_:Y_[start:end]})
        if i%500==0:
            print("After %d training steps,w1 is:"%(i))
            print(sess.run(w1),"\n")
    print("Final w1:\n",sess.run(w1))
  • Results
    The same deprecation warnings as before appear, but the result is fine: w1 converges to roughly (1, 1), matching y_ = 1*x1 + 1*x2.
After 0 training steps,w1 is:
[[-0.80974597]
 [ 1.4852903 ]] 

After 500 training steps,w1 is:
[[-0.46074435]
 [ 1.641878  ]] 

After 1000 training steps,w1 is:
[[-0.21939856]
 [ 1.6984766 ]] 

After 1500 training steps,w1 is:
[[-0.04415595]
 [ 1.7003176 ]] 

After 2000 training steps,w1 is:
[[0.08942621]
 [1.673328  ]] 

After 2500 training steps,w1 is:
[[0.19583555]
 [1.6322677 ]] 

After 3000 training steps,w1 is:
[[0.28375748]
 [1.5854434 ]] 

After 3500 training steps,w1 is:
[[0.35848638]
 [1.5374472 ]] 

After 4000 training steps,w1 is:
[[0.42332518]
 [1.4907393 ]] 

After 4500 training steps,w1 is:
[[0.48040026]
 [1.4465574 ]] 

After 5000 training steps,w1 is:
[[0.53113604]
 [1.4054536 ]] 

After 5500 training steps,w1 is:
[[0.5765325]
 [1.3675941]] 

After 6000 training steps,w1 is:
[[0.61732584]
 [1.3329403 ]] 

After 6500 training steps,w1 is:
[[0.6540846]
 [1.3013426]] 

After 7000 training steps,w1 is:
[[0.6872685]
 [1.272602 ]] 

After 7500 training steps,w1 is:
[[0.71725976]
 [1.2465005 ]] 

After 8000 training steps,w1 is:
[[0.7443861]
 [1.2228197]] 

After 8500 training steps,w1 is:
[[0.7689324]
 [1.2013483]] 

After 9000 training steps,w1 is:
[[0.79115134]
 [1.1818889 ]] 

After 9500 training steps,w1 is:
[[0.811267 ]
 [1.1642567]] 

After 10000 training steps,w1 is:
[[0.8294814]
 [1.1482829]] 

After 10500 training steps,w1 is:
[[0.84597576]
 [1.1338125 ]] 

After 11000 training steps,w1 is:
[[0.8609128]
 [1.1207061]] 

After 11500 training steps,w1 is:
[[0.87444043]
 [1.1088346 ]] 

After 12000 training steps,w1 is:
[[0.88669145]
 [1.0980824 ]] 

After 12500 training steps,w1 is:
[[0.8977863]
 [1.0883439]] 

After 13000 training steps,w1 is:
[[0.9078348]
 [1.0795243]] 

After 13500 training steps,w1 is:
[[0.91693527]
 [1.0715363 ]] 

After 14000 training steps,w1 is:
[[0.92517716]
 [1.0643018 ]] 

After 14500 training steps,w1 is:
[[0.93264157]
 [1.0577497 ]] 

After 15000 training steps,w1 is:
[[0.9394023]
 [1.0518153]] 

After 15500 training steps,w1 is:
[[0.9455251]
 [1.0464406]] 

After 16000 training steps,w1 is:
[[0.95107025]
 [1.0415728 ]] 

After 16500 training steps,w1 is:
[[0.9560928]
 [1.037164 ]] 

After 17000 training steps,w1 is:
[[0.96064115]
 [1.0331714 ]] 

After 17500 training steps,w1 is:
[[0.96476096]
 [1.0295546 ]] 

After 18000 training steps,w1 is:
[[0.9684917]
 [1.0262802]] 

After 18500 training steps,w1 is:
[[0.9718707]
 [1.0233142]] 

After 19000 training steps,w1 is:
[[0.974931 ]
 [1.0206276]] 

After 19500 training steps,w1 is:
[[0.9777026]
 [1.0181949]] 

Final w1:
 [[0.98019385]
 [1.0159807 ]]
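With the final w1, predicting sales for a new day is a single matmul. A hypothetical snippet (the new_day values are made up; it must run inside the same session, after the training loop):

new_day=[[0.4,0.5]]  #hypothetical factors x1, x2 for the day to predict
print("predicted sales:",sess.run(y,feed_dict={x:new_day}))  #about 0.4*0.98+0.5*1.016 = 0.90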
  • Extension
    Different loss functions lead to different predictions, as the three runs below show.


    opt4_1.png

    By default, when cost and profit are weighted equally (the MSE loss), the predictions converge toward (1, 1).

opt4_2.png

When cost is less than profit (the commented COST=1, PROFIT=9 case), the weights still approach (1, 1) but settle slightly above 1: the model errs toward over-predicting.


opt4_3.png

When cost is greater than profit, the weights approach (1, 1) but settle slightly below 1: the model errs toward under-predicting.
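Why the asymmetry shifts the optimum: each unit predicted short forgoes PROFIT, each unit predicted over wastes COST, so expected loss is minimized not at the mean of the noisy sales but at the PROFIT/(COST+PROFIT) quantile. With COST=1, PROFIT=9 that is the 0.9 quantile, about x1+x2+0.04 for noise uniform in [-0.05, 0.05), which is why the learned weights drift above 1. A standalone numeric check (assumptions: COST=1, PROFIT=9, demand centered at 1.0; not part of the original scripts):

import numpy as np

COST,PROFIT=1.0,9.0
rng=np.random.RandomState(0)
demand=1.0+(rng.rand(100000)/10.0-0.05)  #simulated sales around 1.0, noise uniform in [-0.05, 0.05)

def expected_loss(pred):
    over=np.maximum(pred-demand,0.0)   #over-produced units cost COST each
    under=np.maximum(demand-pred,0.0)  #under-produced units forgo PROFIT each
    return np.mean(COST*over+PROFIT*under)

for pred in (0.95,1.00,1.04,1.05):
    print(pred,expected_loss(pred))  #the minimum lands near 1.04, the 0.9 quantile, not at the mean 1.0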
