layer.h & layer.c define the basic building block of deep learning, the layer: its various attributes and operations.
| Type | Name | Meaning |
|---|---|---|
| enum LAYER_TYPE | type | Layer type |
| enum ACTIVATION | activation | Activation function type |
| COST_TYPE | cost_type | Cost (loss) function type |
| void * | forward(...) | Forward-pass function pointer |
| void * | backward(...) | Backward-pass function pointer |
| void * | update(...) | Parameter-update function pointer |
| void * | forward_gpu(...) | GPU forward-pass function pointer |
| void * | backward_gpu(...) | GPU backward-pass function pointer |
| void * | update_gpu(...) | GPU parameter-update function pointer |
| int | batch_normalize | Whether to apply batch normalization |
| int | shortcut | ? |
| int | batch | Batch size |
| int | forced | ? |
| int | flipped | Whether it is flipped |
| int | inputs | Number of elements in the input feature map (per sample of a batch) |
| int | outputs | Number of elements in the output feature map (per sample of a batch) |
| int | nweights | Number of weight elements in the filters |
| int | nbiases | Number of bias elements in the filters |
| int | extra | ? |
| int | truths | Number of ground-truth box values (e.g. 30*5) |
| int | h,w,c | Height, width, and depth (number of channels) of the input feature map |
| int | out_h,out_w,out_c | Height, width, and depth (number of channels) of the output feature map |
| int | n | Meaning differs by layer: in region_layer it is the number of boxes per cell; in route_layer it is the number of layers being concatenated |
| int | max_boxes | Maximum number of ground-truth boxes |
| int | groups | ? related to softmax_tree |
| int | size | ? possibly the number of elements per predicted box (= coords + classes + 1) |
| int | side | ? possibly the height/width of the final feature map |
| int | stride | Stride of the sliding window |
| int | reverse | Whether to reverse |
| int | flatten | Whether to flatten |
| int | pad | Amount of zero-padding applied to the feature map |
| int | sqrt | Whether to take the square root |
| int | flip | Whether to flip |
| int | binary | Whether the weights are binarized |
| int | xnor | Whether both the weights and the feature maps are binarized |
| int | steps | Number of iterations (time steps) |
| int | hidden | ? |
| int | truth | ? whether ground truth |
| float | smooth | ? whether to smooth |
| float | dot | ? |
| float | angle | Magnitude of the angle (rotation) augmentation |
| float | jitter | Magnitude of the jitter augmentation |
| float | saturation | Magnitude of the saturation augmentation |
| float | exposure | Magnitude of the exposure augmentation |
| float | shift | Magnitude of the shift (translation) augmentation |
| float | ratio | Magnitude of the aspect-ratio (width/height) adjustment |
| float | learning_rate_scale | Scaling factor applied to the learning rate |
| int | softmax | ? whether to apply softmax |
| int | classes | Number of classes (e.g. 20) |
| int | coords | Number of coordinates (4) |
| int | background | ? whether background |
| int | rescore | Whether to rescore (for the has-object confidence loss: if 0 the target regresses to 1, if 1 it regresses to the IoU) |
| int | objectness | ? |
| int | does_cost | ? compute the cost |
| int | joint | ? joint/linked |
| int | noadjust | ? do not adjust |
| int | reorg | ? whether to reorganize the order |
| int | log | ? take the logarithm |
| int | adam | Whether to use the Adam optimizer (instead of plain SGD) |
| float | B1 | Adam parameter β1 (first-moment decay rate) |
| float | B2 | Adam parameter β2 (second-moment decay rate) |
| float | eps | Adam parameter ε (numerical-stability term) |
| int | t | ? Adam time step |
| float | alpha | ? normalization (LRN) layer parameter |
| float | beta | ? normalization (LRN) layer parameter |
| float | kappa | ? normalization (LRN) layer parameter |
| float | coord_scale | Weight of the coordinate term in the loss |
| float | object_scale | Weight of the has-object term in the loss |
| float | noobject_scale | Weight of the no-object term in the loss |
| float | class_scale | Weight of the classification (softmax) term in the loss |
| int | bias_match | ? |
| int | random | ? |
| float | thresh | Threshold |
| int | classfix | ? |
| int | absolute | ? |
| int | onlyforward | Whether to run only the forward pass |
| int | stopbackward | Whether to stop backpropagation at this layer |
| int | dontload | ? skip loading this layer's weights |
| int | dontloadscales | ? skip loading the scale parameters |
| float | temperature | ? softmax temperature |
| float | probability | ? dropout probability |
| float | scale | ? scaling factor |
| char * | cweights | ? filter weight parameter values |
| int * | indexes | Index values (used in max pooling to record the positions of the selected elements in the input feature map) |
| int * | input_layers | |
| int * | input_sizes | |
| int * | map | |
| float * | rand | |
| float * | cost | |
| float * | state | ?RNN LSTM |
| float * | prev_state | ? |
| float * | forgot_state | ? |
| float * | forgot_delta | ? |
| float * | state_delta | ?RNN LSTM |
| float * | concat | ? |
| float * | concat_delta | ? |
| float * | binary_weights | Binarized filter weight values |
| float * | biases | Filter bias values |
| float * | bias_updates | Updates (gradients) of the filter biases |
| float * | scales | ? scale values |
| float * | scale_updates | ? updates of the scale values |
| float * | weights | Filter weight values |
| float * | weight_updates | Updates (gradients) of the filter weights |
| float * | delta | Gradient with respect to this layer's output |
| float * | output | Output feature-map values (note: there is no input feature-map field, because the input is usually just the previous layer's output, taken from the network struct) |
| float * | squared | ? squared values |
| float * | norms | ? L1/L2 regularization (norm) values |
| float * | spatial_mean | ? spatial mean |
| float * | mean | ? mean |
| float * | variance | ? variance |
| float * | mean_delta | ? gradient of the mean |
| float * | variance_delta | ? gradient of the variance |
| float * | rolling_mean | ? rolling (running) mean |
| float * | rolling_variance | ? rolling (running) variance |
| float * | x | ? (batch norm: saved pre-normalization input) |
| float * | x_norm | ? (batch norm: saved normalized input) |
| float * | m | ? (Adam: first-moment accumulator) |
| float * | v | ? (Adam: second-moment accumulator) |
| float * | bias_m | |
| float * | bias_v | |
| float * | scale_m | |
| float * | scale_v | |
| float * | z_cpu | |
| float * | r_cpu | |
| float * | h_cpu | |
| float * | binary_input | |
| struct layer * | input_layer | |
| struct layer * | self_layer | |
| struct layer * | output_layer | |
| struct layer * | input_gate_layer | RNN LSTM |
| struct layer * | state_gate_layer | RNN LSTM |
| struct layer * | input_save_layer | RNN LSTM |
| struct layer * | state_save_layer | RNN LSTM |
| struct layer * | input_state_layer | RNN LSTM |
| struct layer * | state_state_layer | RNN LSTM |
| struct layer * | input_z_layer | RNN LSTM |
| struct layer * | state_z_layer | RNN LSTM |
| struct layer * | input_r_layer | RNN LSTM |
| struct layer * | state_r_layer | RNN LSTM |
| struct layer * | input_h_layer | RNN LSTM |
| struct layer * | state_h_layer | RNN LSTM |
| tree * | softmax_tree | Hierarchical label tree (taxonomy-style, e.g. kingdom/phylum/class/order/family/genus/species); a trie-like structure |
| size_t | workspace_size | Size of the scratch workspace this layer needs (e.g. for im2col in convolutional layers) |
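
For orientation, here is a heavily abridged sketch of the struct behind this table. It is not the actual definition in layer.h: the exact set of fields, the enum members shown, and the function-pointer signatures are approximations that vary between darknet versions.

```c
#include <stddef.h>

/* Abridged sketch of the layer struct, for orientation only. The real
 * definition in layer.h has far more fields (see the table above), and the
 * exact enum members and function-pointer signatures differ between
 * darknet versions. */
typedef enum { CONVOLUTIONAL, MAXPOOL, REGION, ROUTE /* ... */ } LAYER_TYPE;
typedef enum { LINEAR, LEAKY, LOGISTIC /* ... */ } ACTIVATION;
typedef enum { SSE, MASKED, SMOOTH } COST_TYPE;

struct network;                         /* defined in network.h */

typedef struct layer layer;
struct layer {
    LAYER_TYPE type;                    /* which kind of layer this is */
    ACTIVATION activation;              /* activation applied to the output */
    COST_TYPE cost_type;                /* loss type (cost layer only) */

    /* per-layer behaviour is plugged in through function pointers */
    void (*forward)  (struct layer, struct network);
    void (*backward) (struct layer, struct network);
    void (*update)   (struct layer, int, float, float, float);

    int batch;                          /* mini-batch size */
    int h, w, c;                        /* input feature-map height/width/channels */
    int out_h, out_w, out_c;            /* output feature-map height/width/channels */
    int inputs, outputs;                /* elements per sample, input and output */

    float *weights, *weight_updates;    /* filter weights and their gradients */
    float *biases,  *bias_updates;      /* biases and their gradients */
    float *output;                      /* output feature map; the input comes from
                                           the previous layer via the network struct */
    float *delta;                       /* gradient w.r.t. this layer's output */

    size_t workspace_size;              /* scratch space this layer needs */
    /* ... many more fields ... */
};
```

The function pointers are what let network.c run all layers uniformly, roughly `l.forward(l, net)` during the forward pass, without switching on the layer type; each concrete layer implementation (convolutional_layer.c, maxpool_layer.c, ...) fills them in at construction time.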
layer.c then implements the function that frees a layer "object" (its memory); it is not discussed in detail here.
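
The clean-up follows a simple pattern: every heap buffer listed in the table is released with free() (the GPU build additionally frees the CUDA-side buffers). A minimal sketch, assuming the abridged struct above and using a hypothetical name free_layer_sketch:

```c
#include <stdlib.h>

/* Minimal sketch of the clean-up pattern in layer.c (hypothetical name;
 * the real free_layer() checks many more pointers and also releases the
 * GPU-side buffers): one free() per heap buffer the layer owns. */
void free_layer_sketch(layer l)
{
    if (l.weights)        free(l.weights);
    if (l.weight_updates) free(l.weight_updates);
    if (l.biases)         free(l.biases);
    if (l.bias_updates)   free(l.bias_updates);
    if (l.output)         free(l.output);
    if (l.delta)          free(l.delta);
    /* ... and so on for every remaining owned pointer field ... */
}
```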