A while back, a project required adding a WeChat-style short-video feature to the chat module. This post summarizes the problems I ran into and how I solved them, in the hope that it helps anyone with a similar requirement.
Preview of the result:

The main problems up front:
1. Video cropping: WeChat's short video keeps only part of the frame the camera captures.
2. Stutter while scrolling the preview: playing the videos with AVPlayer stutters badly during scrolling.
Let's build it step by step.
Part 1: Implementing video recording
1. The recorder class WKMovieRecorder
Create a recorder class, WKMovieRecorder, responsible for video recording.
@interface WKMovieRecorder : NSObject

+ (WKMovieRecorder*) sharedRecorder;

- (instancetype)initWithMaxDuration:(NSTimeInterval)duration;

@end
Define the callback blocks:
/**
 * Recording finished
 *
 * @param info         callback info
 * @param finishReason why recording ended (cancelled vs. finished normally)
 */
typedef void(^FinishRecordingBlock)(NSDictionary *info, WKRecorderFinishedReason finishReason);

/**
 * Focus area changed
 */
typedef void(^FocusAreaDidChanged)();

/**
 * Authorization check
 *
 * @param success whether camera access was granted
 */
typedef void(^AuthorizationResult)(BOOL success);

@interface WKMovieRecorder : NSObject

// Callbacks
@property (nonatomic, copy) FinishRecordingBlock finishBlock; // recording finished
@property (nonatomic, copy) FocusAreaDidChanged focusAreaDidChangedBlock;
@property (nonatomic, copy) AuthorizationResult authorizationResultBlock;

@end
Define a cropSize for video cropping:
@property (nonatomic, assign) CGSize cropSize;
Next comes the capture implementation. The code runs a little long; if you'd rather skip it, jump straight to the video-cropping section.
Recording configuration:
@interface WKMovieRecorder ()
<
AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureAudioDataOutputSampleBufferDelegate,
WKMovieWriterDelegate
>
{
AVCaptureSession* _session;
AVCaptureVideoPreviewLayer* _preview;
WKMovieWriter* _writer;
// Recording / pause state
BOOL _isCapturing;
BOOL _isPaused;
BOOL _discont;
int _currentFile;
CMTime _timeOffset;
CMTime _lastVideo;
CMTime _lastAudio;
NSTimeInterval _maxDuration;
}
// Session management.
@property (nonatomic, strong) dispatch_queue_t sessionQueue;
@property (nonatomic, strong) dispatch_queue_t videoDataOutputQueue;
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDevice *captureDevice;
@property (nonatomic, strong) AVCaptureDeviceInput *videoDeviceInput;
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput;
@property (nonatomic, strong) AVCaptureConnection *videoConnection;
@property (nonatomic, strong) AVCaptureConnection *audioConnection;
@property (nonatomic, strong) NSDictionary *videoCompressionSettings;
@property (nonatomic, strong) NSDictionary *audioCompressionSettings;
@property (nonatomic, strong) AVAssetWriterInputPixelBufferAdaptor *adaptor;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
//Utilities
@property (nonatomic, strong) NSMutableArray *frames;// recorded frames
@property (nonatomic, assign) CaptureAVSetupResult result;
@property (atomic, readwrite) BOOL isCapturing;
@property (atomic, readwrite) BOOL isPaused;
@property (nonatomic, strong) NSTimer *durationTimer;
@property (nonatomic, assign) WKRecorderFinishedReason finishReason;
@end
The instantiation methods:
+ (WKMovieRecorder *)sharedRecorder
{
static WKMovieRecorder *recorder;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
recorder = [[WKMovieRecorder alloc] initWithMaxDuration:CGFLOAT_MAX];
});
return recorder;
}
- (instancetype)initWithMaxDuration:(NSTimeInterval)duration
{
if(self = [self init]){
_maxDuration = duration;
_duration = 0.f;
}
return self;
}
- (instancetype)init
{
self = [super init];
if (self) {
_maxDuration = CGFLOAT_MAX;
_duration = 0.f;
_sessionQueue = dispatch_queue_create("wukong.movieRecorder.queue", DISPATCH_QUEUE_SERIAL );
_videoDataOutputQueue = dispatch_queue_create( "wukong.movieRecorder.video", DISPATCH_QUEUE_SERIAL );
dispatch_set_target_queue( _videoDataOutputQueue, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_HIGH, 0 ) );
}
return self;
}
2. Initial setup
Setup breaks down into session creation, a permission check, and session configuration.
1) Session creation
self.session = [[AVCaptureSession alloc] init];
self.result = CaptureAVSetupResultSuccess;
2) Permission check
// Permission check
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined: {
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
if (granted) {
self.result = CaptureAVSetupResultSuccess;
}
}];
break;
}
case AVAuthorizationStatusAuthorized: {
break;
}
default:{
self.result = CaptureAVSetupResultCameraNotAuthorized;
}
}
if ( self.result != CaptureAVSetupResultSuccess) {
if (self.authorizationResultBlock) {
self.authorizationResultBlock(NO);
}
return;
}
3) Session configuration
Note that AVCaptureSession configuration must not happen on the main thread; create your own serial queue for it.
3.1.1 Getting the capture device and input
AVCaptureDevice *captureDevice = [[self class] deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
_captureDevice = captureDevice;
NSError *error = nil;
_videoDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
if (!_videoDeviceInput) {
NSLog(@"Device not found");
}
3.1.2 Frame rate setup
The frame-rate setting mainly exists to accommodate the single-core iPhone 4, a device that really ought to be retired by now.
int frameRate;
if ( [NSProcessInfo processInfo].processorCount == 1 )
{
if ([self.session canSetSessionPreset:AVCaptureSessionPresetLow]) {
[self.session setSessionPreset:AVCaptureSessionPresetLow];
}
frameRate = 10;
}else{
if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
[self.session setSessionPreset:AVCaptureSessionPreset640x480];
}
frameRate = 30;
}
CMTime frameDuration = CMTimeMake( 1, frameRate );
if ( [_captureDevice lockForConfiguration:&error] ) {
_captureDevice.activeVideoMaxFrameDuration = frameDuration;
_captureDevice.activeVideoMinFrameDuration = frameDuration;
[_captureDevice unlockForConfiguration];
}
else {
NSLog( @"videoDevice lockForConfiguration returned error %@", error );
}
3.1.3 Video output setup
The thing to watch here is setting the videoConnection's orientation, so the picture stays correct when the device rotates.
//Video
if ([self.session canAddInput:_videoDeviceInput]) {
[self.session addInput:_videoDeviceInput];
self.videoDeviceInput = _videoDeviceInput;
[self.session removeOutput:_videoDataOutput];
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
_videoDataOutput = videoOutput;
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self queue:_videoDataOutputQueue];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
if ( [_session canAddOutput:videoOutput] ) {
[_session addOutput:videoOutput];
[_captureDevice addObserver:self forKeyPath:@"adjustingFocus" options:NSKeyValueObservingOptionNew context:FocusAreaChangedContext];
_videoConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if(_videoConnection.isVideoStabilizationSupported){
_videoConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
}
UIInterfaceOrientation statusBarOrientation = [UIApplication sharedApplication].statusBarOrientation;
AVCaptureVideoOrientation initialVideoOrientation = AVCaptureVideoOrientationPortrait;
if ( statusBarOrientation != UIInterfaceOrientationUnknown ) {
initialVideoOrientation = (AVCaptureVideoOrientation)statusBarOrientation;
}
_videoConnection.videoOrientation = initialVideoOrientation;
}
}
else{
NSLog(@"Could not add video device input to the session");
}
3.1.4 Audio setup
To avoid dropping frames, put the audio output's callback on its own serial queue.
//audio
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
if ( ! audioDeviceInput ) {
NSLog( @"Could not create audio device input: %@", error );
}
if ( [self.session canAddInput:audioDeviceInput] ) {
[self.session addInput:audioDeviceInput];
}
else {
NSLog( @"Could not add audio device input to the session" );
}
AVCaptureAudioDataOutput *audioOut = [[AVCaptureAudioDataOutput alloc] init];
// Put audio on its own queue to ensure that our video processing doesn't cause us to drop audio
dispatch_queue_t audioCaptureQueue = dispatch_queue_create( "wukong.movieRecorder.audio", DISPATCH_QUEUE_SERIAL );
[audioOut setSampleBufferDelegate:self queue:audioCaptureQueue];
if ( [self.session canAddOutput:audioOut] ) {
[self.session addOutput:audioOut];
}
_audioConnection = [audioOut connectionWithMediaType:AVMediaTypeAudio];
One more point: the session configuration code should be wrapped like this:
[self.session beginConfiguration];
// ... configuration code ...
[self.session commitConfiguration];
For reasons of space, I'll only cover the highlights of the remaining recording code.
3.2 Saving the video
Now we write audio and video to the sandbox from the AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate callbacks. One thing to watch for: the first frame delivered after the session starts is black and has to be discarded.
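Stripped of AVFoundation, the first-frame rule above is just a counter consulted before appending a sample buffer to the writer. A minimal sketch of that gate; the names here are illustrative, not taken from WKMovieRecorder:

```c
#include <stdbool.h>

// Drop the first video frame delivered after the session starts (it arrives
// black) and write every frame after it.
typedef struct { int framesSeen; } FrameGate;

static bool frame_gate_should_write(FrameGate *gate) {
    return gate->framesSeen++ > 0;  // frame 0 is dropped, frames 1..n are kept
}
```

In the real delegate you would reset the counter whenever the session starts and check the gate before handing the buffer to the writer.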
3.2.1 Create a WKMovieWriter class to encapsulate the writing
WKMovieWriter's job is to take the CMSampleBufferRefs, crop them via AVAssetWriter, and write the result to the sandbox.
Here is the cropping configuration. AVAssetWriter crops the video according to cropSize. One caveat: cropSize's width must be a multiple of 320, otherwise the cropped video shows a green line along its edge.
NSDictionary *videoSettings;
if (_cropSize.height == 0 || _cropSize.width == 0) {
_cropSize = [UIScreen mainScreen].bounds.size;
}
videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:_cropSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:_cropSize.height], AVVideoHeightKey,
AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,
nil];
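To honor the multiple-of-320 constraint, it may be safest to snap whatever width the UI asks for before building videoSettings. A small helper as a sketch; the function name is mine, and the 320 alignment comes from the observation above, not from Apple documentation:

```c
// Snap a requested crop width down to the nearest positive multiple of
// `align` (320 in this post) so the encoder does not pad the frame and
// leave a green edge.
static int aligned_crop_width(int requested, int align) {
    if (align <= 0 || requested <= 0) return requested;
    int snapped = (requested / align) * align;
    return snapped > 0 ? snapped : align;  // never return 0
}
```

The snapped value would then be passed as AVVideoWidthKey in place of the raw cropSize.width.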
That completes the recording side.
Next we need to solve the preview problem.
Part 2: Fixing the preview stutter
1.1 Generating a GIF
Some searching turned up a blog post explaining that the WeChat team avoided the preview stutter by playing GIF images instead of video. The sample code in that post is broken, though: playing the images through Core Animation makes memory spike until the app crashes. Still, it gave me an idea. A previous project's launch screen had played a GIF, so I wondered: could I convert the video to images, the images to a GIF, and play that instead? After some googling, I found a way to turn an image array into a GIF.
The GIF conversion code:
static void makeAnimatedGif(NSArray *images, NSURL *gifURL, NSTimeInterval duration) {
NSTimeInterval perSecond = duration /images.count;
NSDictionary *fileProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
}
};
NSDictionary *frameProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFDelayTime: @(perSecond), // a float (not double!) in seconds, rounded to centiseconds in the GIF data
}
};
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)gifURL, kUTTypeGIF, images.count, NULL);
if (!destination) {
    NSLog(@"failed to create image destination");
    return;
}
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);
for (UIImage *image in images) {
@autoreleasepool {
CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
}
if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"failed to finalize image destination");
}
CFRelease(destination);
}
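One detail of the code above worth making explicit: the per-frame delay is duration / images.count, and as the comment notes, the GIF container stores it rounded to centiseconds, so very short delays get quantized. A helper that mirrors that arithmetic; it is illustrative, not part of the post's classes:

```c
// Per-frame GIF delay in centiseconds, as the GIF container will store it:
// the total duration split evenly across frames, rounded to 1/100 s.
static int gif_frame_delay_centiseconds(double duration, int frameCount) {
    if (frameCount <= 0) return 0;
    double perFrame = duration / frameCount;  // seconds per frame
    return (int)(perFrame * 100.0 + 0.5);     // round to centiseconds
}
```

At 30 fps the delay is about 3 centiseconds, so the quantization error is small but nonzero.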
The conversion worked, but a new problem appeared: generating the GIF with ImageIO makes memory spike, jumping past 100 MB in an instant, and generating several GIFs concurrently will still crash the app. To fix this, GIF generation has to go through a serial queue.
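The serial queue amounts to guaranteeing that only one GIF encode is in flight at a time, so only one ImageIO memory spike exists at once. In GCD that is a serial dispatch_queue_t; the same serialization sketched with a plain mutex, with names of my own invention:

```c
#include <pthread.h>

// Allow at most one GIF encode at a time, bounding the encoder's peak
// memory. In the post this role is played by a serial dispatch queue.
static pthread_mutex_t gif_lock = PTHREAD_MUTEX_INITIALIZER;

static int encode_gif_serialized(int (*encode)(void *ctx), void *ctx) {
    pthread_mutex_lock(&gif_lock);
    int result = encode(ctx);  // only one thread can be encoding here
    pthread_mutex_unlock(&gif_lock);
    return result;
}

// Example payload: a pretend encoder that just reports its frame count.
static int demo_encode(void *ctx) { return *(int *)ctx; }
```

On iOS the simpler equivalent is to dispatch_async every makeAnimatedGif call onto one serial queue.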
1.2 Converting the video to UIImages
The conversion uses AVAssetReader, AVAssetTrack, and AVAssetReaderTrackOutput.
// Convert to UIImages
- (void)convertVideoUIImagesWithURL:(NSURL *)url finishBlock:(void (^)(id images, NSTimeInterval duration))finishBlock
{
AVAsset *asset = [AVAsset assetWithURL:url];
NSError *error = nil;
self.reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
__weak typeof(self)weakSelf = self;
dispatch_queue_t backgroundQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(backgroundQueue, ^{
__strong typeof(weakSelf) strongSelf = weakSelf;
if (error) {
NSLog(@"%@", [error localizedDescription]);
}
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack =[videoTracks firstObject];
if (!videoTrack) {
return ;
}
int m_pixelFormatType;
// For video playback:
m_pixelFormatType = kCVPixelFormatType_32BGRA;
// For other uses, such as video compression:
// m_pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;
NSMutableDictionary *options = [NSMutableDictionary dictionary];
[options setObject:@(m_pixelFormatType) forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *videoReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];
if ([strongSelf.reader canAddOutput:videoReaderOutput]) {
[strongSelf.reader addOutput:videoReaderOutput];
}
[strongSelf.reader startReading];
NSMutableArray *images = [NSMutableArray array];
// Make sure nominalFrameRate > 0; I've run into 0-fps videos recorded on Android
while ([strongSelf.reader status] == AVAssetReaderStatusReading && videoTrack.nominalFrameRate > 0) {
@autoreleasepool {
// Read a video sample
CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
if (!videoBuffer) {
break;
}
[images addObject:[WKVideoConverter convertSampleBufferRefToUIImage:videoBuffer]];
CFRelease(videoBuffer);
}
}
if (finishBlock) {
dispatch_async(dispatch_get_main_queue(), ^{
finishBlock(images, duration);
});
}
});
}
One point worth noting here: the conversion runs so fast that the videoBuffers cannot be released in time, and converting several videos at once still runs into memory problems. The fix is wrapping each iteration in an @autoreleasepool so buffers are released promptly:
@autoreleasepool {
    // Read a video sample
    CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
    if (!videoBuffer) {
        break;
    }
    [images addObject:[WKVideoConverter convertSampleBufferRefToUIImage:videoBuffer]];
    CFRelease(videoBuffer);
}
With that, the hard parts (as I see them) of WeChat-style short video are solved. For the rest of the implementation, see the demo, which can be downloaded here.
Pausing video recording: http://www.gdcl.co.uk/2013/02/20/iPhone-Pause.html
Fixing the green edge when cropping: http://stackoverflow.com/questions/22883525/avassetexportsession-giving-me-a-green-border-on-right-and-bottom-of-output-vide
Video cropping: http://stackoverflow.com/questions/15737781/video-capture-with-11-aspect-ratio-in-IOS/16910263#16910263
Converting a CMSampleBufferRef to an image: https://developer.apple.com/library/IOS/qa/qa1702/_index.html
Analysis of WeChat's short video: http://www.jianshu.com/p/3d5ccbde0de1
Thanks to the authors of the articles above.
That's all for this article. I hope it helps.