iOS: Vision, the New Framework for Face Recognition and Text Detection

Author: 坤哥爱卿 | Published 2019-11-01 15:51

Introduction

iOS 11+ and macOS 10.13+ introduce the Vision framework, which provides face detection and recognition, machine-learning image analysis, barcode detection, text detection, object tracking, and more. It is built on top of Core ML.

Vision supports the following tasks:

1. Image registration (the effect shown in the previous article)
2. Rectangle detection
3. QR code and barcode detection
4. Object tracking
5. Text detection
6. Face detection
7. Facial landmark detection
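Each of these tasks maps to a dedicated VNRequest subclass that you run through a request handler. As a minimal sketch (assuming a non-nil `UIImage *image` with a CGImage backing), rectangle detection on a still image might look like this:

```objectivec
#import <Vision/Vision.h>

// Sketch: detect rectangles in a still image.
// `image` is an assumed UIImage variable, not from the article's code.
VNDetectRectanglesRequest *rectRequest = [[VNDetectRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest *req, NSError *error) {
    for (VNRectangleObservation *observation in req.results) {
        // boundingBox is normalized to [0, 1] with the origin at the bottom-left.
        NSLog(@"Rectangle at %@ (confidence %.2f)",
              NSStringFromCGRect(observation.boundingBox), observation.confidence);
    }
}];

VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCGImage:image.CGImage options:@{}];
NSError *error = nil;
[handler performRequests:@[rectRequest] error:&error];
```

Swapping in a different request subclass (barcode, text, face, etc.) reuses exactly the same handler pipeline.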

1. API Architecture

1.1 The Vision framework consists of the following classes:

VNRequestHandler: in practice, the two concrete classes VNImageRequestHandler and VNSequenceRequestHandler, both inheriting from NSObject.

VNRequest: the abstract superclass of image-analysis requests.

VNObservation: the abstract superclass of image-analysis results.

VNFaceLandmarks: facial-landmark information.

VNError: error definitions.

VNUtils: utility functions.

VNTypes: common type definitions.

Plus two protocols: VNRequestRevisionProviding and VNFaceObservationAccepting.

1.2 Image types supported by Vision

Looking at the initializers declared in VNRequestHandler.h, we can see that the supported input types are:

1. CVPixelBufferRef
2. CGImageRef
3. CIImage
4. NSURL
5. NSData
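Each of these types has a corresponding VNImageRequestHandler initializer. A quick sketch (the source variables `pixelBuffer`, `cgImage`, `ciImage`, `fileURL`, and `imageData` here are placeholders, not values from the article):

```objectivec
#import <Vision/Vision.h>

// One VNImageRequestHandler initializer per supported input type.
VNImageRequestHandler *fromPixelBuffer = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer options:@{}];
VNImageRequestHandler *fromCGImage     = [[VNImageRequestHandler alloc] initWithCGImage:cgImage options:@{}];
VNImageRequestHandler *fromCIImage     = [[VNImageRequestHandler alloc] initWithCIImage:ciImage options:@{}];
VNImageRequestHandler *fromURL         = [[VNImageRequestHandler alloc] initWithURL:fileURL options:@{}];
VNImageRequestHandler *fromData        = [[VNImageRequestHandler alloc] initWithData:imageData options:@{}];
```

Whichever initializer you pick, the rest of the API is identical: build one or more VNRequest objects and pass them to performRequests:error:.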

2. Usage

#pragma mark - Method

- (void)initAVCapturWritterConfig
{
    self.session = [[AVCaptureSession alloc] init];
    
    // Video input: prefer the front camera
    AVCaptureDevice *videoDevice = [self deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionFront];

    if (videoDevice.isFocusPointOfInterestSupported && [videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        // lockForConfiguration: returns NO on failure; changing the focus mode
        // without holding the lock raises an exception, so check the result.
        if ([videoDevice lockForConfiguration:nil]) {
            [videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
            [videoDevice unlockForConfiguration];
        }
    }
    
    AVCaptureDeviceInput *cameraDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:videoDevice error:nil];
    
    if ([self.session canAddInput:cameraDeviceInput]) {
        [self.session addInput:cameraDeviceInput];
    }
    
    // Video data output: BGRA frames, which Vision can consume directly
    self.videoOutPut = [[AVCaptureVideoDataOutput alloc] init];
    NSDictionary * outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],(id)kCVPixelBufferPixelFormatTypeKey, nil];
    [self.videoOutPut setVideoSettings:outputSettings];
    if ([self.session canAddOutput:self.videoOutPut]) {
        [self.session addOutput:self.videoOutPut];
    }
    self.videoConnection = [self.videoOutPut connectionWithMediaType:AVMediaTypeVideo];
    self.videoConnection.enabled = NO;
    [self.videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    
    // Initialize the preview layer
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    
}

// Find a capture device matching the given media type and position
- (AVCaptureDevice *)deviceWithMediaType:(NSString *)mediaType preferringPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:mediaType];
    AVCaptureDevice *captureDevice = devices.firstObject;
    
    for ( AVCaptureDevice *device in devices ) {
        if ( device.position == position ) {
            captureDevice = device;
            break;
        }
    }
    return captureDevice;
}

- (void)setUpSubviews
{
    // Container view
    self.realTimeView = [[UIView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:self.realTimeView];
    
    // Live camera preview
    self.previewLayer.frame = self.realTimeView.frame;
    [self.realTimeView.layer addSublayer:self.previewLayer];
    
    // Overlay image that will track the detected face
    self.maskView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"2"]];
    self.maskView.hidden = YES;
    [self.realTimeView addSubview:self.maskView];
    
}

- (void)initVN
{
    // Face landmarks detection request
    self.faceRequest = [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        
        VNDetectFaceLandmarksRequest *faceRequest = (VNDetectFaceLandmarksRequest*)request;
        
        VNFaceObservation *firstObservation = [faceRequest.results firstObject];
        
        dispatch_async(dispatch_get_main_queue(), ^{
            
            if (firstObservation) {
                
                CGRect boundingBox = [firstObservation boundingBox];
                
                // boundingBox is normalized; convert it to view coordinates.
                CGRect rect = VNImageRectForNormalizedRect(boundingBox,self.realTimeView.frame.size.width,self.realTimeView.frame.size.height);
                // Flip x for the mirrored front camera, and y because Vision's
                // origin is at the bottom-left while UIKit's is at the top-left.
                CGRect frame = CGRectMake(self.realTimeView.frame.size.width - rect.origin.x - rect.size.width, self.realTimeView.frame.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
                self.maskView.frame = frame;
                self.maskView.hidden = NO;
            }
            else {
                self.maskView.hidden = YES;
            }
        });
        
    }];

}

- (void)startVideoCapture
{
    [self.session startRunning];
    self.videoConnection.enabled = YES;
    self.videoQueue = dispatch_queue_create("videoQueue", NULL);
    [self.videoOutPut setSampleBufferDelegate:self queue:self.videoQueue];
}

- (void)stopVideoCapture
{
    [self.videoOutPut setSampleBufferDelegate:nil queue:nil];
    self.videoConnection.enabled = NO;
    self.videoQueue = nil;
    [self.session stopRunning];
}

#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Called on videoQueue; run the face request against each captured frame.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:@{}];
    NSError *error = nil;
    [handler performRequests:@[self.faceRequest] error:&error];
}
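The same delegate pipeline also covers the text detection mentioned in the title: only the request type changes. A hedged sketch (this request is not part of the article's sample project; `handler` stands for a VNImageRequestHandler built as above):

```objectivec
#import <Vision/Vision.h>

// Sketch: text detection with the same handler pipeline.
VNDetectTextRectanglesRequest *textRequest = [[VNDetectTextRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
    for (VNTextObservation *observation in request.results) {
        // characterBoxes is populated only when reportCharacterBoxes is YES.
        NSLog(@"Text region: %@", NSStringFromCGRect(observation.boundingBox));
    }
}];
// Ask Vision for per-character boxes in addition to whole-text regions.
textRequest.reportCharacterBoxes = YES;

NSError *error = nil;
[handler performRequests:@[textRequest] error:&error];
```

Note that VNDetectTextRectanglesRequest only locates text regions; it does not recognize the characters themselves.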

Source: https://www.haomeiwen.com/subject/vicfbctx.html