Trident

Author: 小松qxs | Published 2019-08-11 14:22
Title: Scale-Aware Trident Networks for Object Detection
URL: https://arxiv.org/pdf/1901.01892.pdf
Motivation: use a multi-branch structure to address the scale-variation problem in object detection.
Content — existing approaches:
1. Multi-scale image pyramids / SNIP: slow inference.
2. Feature pyramids (SSD / FPN):
SSD: bottom layers carry little semantic information, yet small objects are predicted from them, so small-object accuracy suffers.
FPN: top-down connections enrich the bottom layers with semantics, improving small objects somewhat. Different feature levels attend to objects of different sizes, but large-object and small-object features differ in semantic and detail content, so detection quality varies across scales; FPN still falls short of image pyramids.

Backbone design factors: network depth (structure), downsample rate, and receptive field. This paper focuses on the receptive field.
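The receptive-field factor above can be made concrete with the standard arithmetic for stacked convolutions. A minimal sketch (stride-1 layers only; the layer count is illustrative, not the paper's actual backbone):

```python
def receptive_field(layers):
    """Theoretical receptive field of stacked stride-1 conv layers.

    layers: list of (kernel_size, dilation) tuples.
    Each layer adds (kernel_size - 1) * dilation to the receptive field.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three stacked 3x3 convs at TridentNet's branch dilation rates 1, 2, 3:
for d in (1, 2, 3):
    print(d, receptive_field([(3, d)] * 3))  # 7, 13, 19
```

With identical weights, simply raising the dilation rate triples the receptive-field growth per layer, which is the knob the paper varies across branches.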

Trident (branches with different receptive fields, a scale-aware training scheme, and weight sharing across branches):
1. The first study of the receptive field's effect in object detection (via dilated convolution).
2. TridentNet: a multi-branch structure with scale-aware training, yielding scale-specific feature maps with a uniform representational power.
3. Weight sharing: no additional parameters or computational cost at inference.
4. mAP of 48.4 using a single model with ResNet-101 as the backbone.

Investigation of Receptive Field:
1. Receptive-field size correlates with detection performance on objects of different scales.
2. Although ResNet-101 already has a large theoretical receptive field, increasing the dilation rate still helps large objects, which shows the effective receptive field is smaller than the theoretical one. Previously the effective receptive field had to balance large and small objects; enlarging the dilation grows the effective receptive field for large objects, while small objects do not benefit.

Trident Network — Network Structure:
1. Parallel convolution branches with the same parameters but different dilation rates.
2. Multi-branch block: identical to the original convolution stage except for the dilation rate; in ResNet's 1x1-3x3-1x1 bottleneck, only the 3x3 convolution's dilation rate is changed.
3. Weight sharing among branches: separate weights would increase the parameter count and risk overfitting. Sharing keeps the parameter count equal to the original network, applies the same transformation with the same representational power to objects of different scales, and lets the shared parameters be trained on samples from all branches (i.e., on more objects).
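As a hedged sketch of the weight-sharing idea (not the paper's implementation): three parallel branches apply the *same* weight tensor at dilation rates 1, 2, 3. The naive single-channel "valid" convolution below is for illustration only; the real network uses padded multi-channel convolutions so branch outputs keep the same spatial size.

```python
import numpy as np

def dilated_conv2d(x, w, dilation):
    """Naive single-channel 'valid' convolution with the given dilation."""
    k = w.shape[0]
    span = (k - 1) * dilation + 1            # effective kernel extent
    H, W = x.shape
    out = np.zeros((H - span + 1, W - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated positions, then weight and sum.
            patch = x[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = (patch * w).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
w = rng.standard_normal((3, 3))   # ONE shared weight tensor for all branches

# Three parallel branches: same weights, different dilation rates.
branches = [dilated_conv2d(x, w, d) for d in (1, 2, 3)]
print([b.shape for b in branches])
```

Because `w` is the only learnable tensor, adding branches adds no parameters, matching point 3 above.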

Scale-aware Training Scheme / Inference and Approximation:
1. Each branch discards boxes outside its valid scale range; the surviving boxes from all branches are then merged with NMS or soft-NMS.
2. Fast inference approximation: to speed up inference, only the major branch is used for prediction, with the valid-range restriction removed.
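A minimal sketch of the per-branch filtering step described above (function names are hypothetical; the valid ranges are the paper's setting from the implementation details below):

```python
import math

# Per-branch valid scale ranges, as in the paper's implementation details.
VALID_RANGES = [(0, 90), (30, 160), (90, float("inf"))]

def in_range(box, lo, hi):
    """Keep a box if its scale sqrt(w * h) falls inside [lo, hi]."""
    x1, y1, x2, y2 = box
    s = math.sqrt((x2 - x1) * (y2 - y1))
    return lo <= s <= hi

def scale_aware_filter(branch_boxes):
    """branch_boxes: one list of (box, score) detections per branch.

    Returns the merged list of detections that survive each branch's
    valid-range test; NMS / soft-NMS is applied to this set afterwards.
    """
    kept = []
    for (lo, hi), dets in zip(VALID_RANGES, branch_boxes):
        kept.extend(d for d in dets if in_range(d[0], lo, hi))
    return kept
```

Note the ranges overlap, so a box can survive from more than one branch; that duplication is exactly why the merged set still goes through NMS or soft-NMS.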
Experiments — Implementation Details:
1. Batch size 16 on 8 GPUs, 12 epochs, learning rate 0.02, decreased by a factor of 0.1 after the 8th and 10th epochs.
2. Backbone: ResNet; valid ranges [0, 90], [30, 160], and [90, ∞].

Ablation Studies:

Components of TridentNet:
1. Multi-branch: even the simplest multi-branch design benefits from different receptive fields.
2. Scale-aware training.
3. Weight sharing.

Number of branches / Stage of trident blocks: conv2 and conv3 feature maps are not large enough to widen the receptive-field discrepancy between the three branches.

Number of trident blocks / Performance of each branch: if single-branch approximate inference is used, the best results come from training all samples on all branches.

Comparison with State-of-the-Art:
Fast approximation: the major branch alone achieves 42.2 / 47.6 AP for TridentNet and TridentNet*, versus 42.7 / 48.4 for full three-branch inference.
Comparison with FPN: Trident applies the same representational power to all scales.
Thoughts
