Implementing camera preview and retrieving face key-point coordinates with the Android Camera2 API

Android 5.0 (API level 21) introduced the new Camera2 API and deprecated the original Camera API. The new API does have a better architecture and looser coupling, which gives developers much more room to work with.
API overview
The main classes involved are the following:
1. CameraManager: the manager for all of the device's cameras. You obtain an instance via getSystemService(), and its getCameraCharacteristics() method returns a CameraCharacteristics instance that describes a camera's properties, for example whether it is the front or the back camera.
2. CameraDevice: represents a single camera. Its createCaptureSession() and createCaptureRequest() methods create CameraCaptureSession and CaptureRequest instances.
3. CameraDevice.StateCallback: an inner class of CameraDevice that receives updates about the camera's connection state; for example, onOpened() is called when the camera opens successfully and onDisconnected() when the connection to the camera is lost.
4. CameraCaptureSession: represents a capture session. setRepeatingRequest() starts the camera preview and capture() takes a still picture. Its two inner classes, CameraCaptureSession.StateCallback and CameraCaptureSession.CaptureCallback, work like CameraDevice.StateCallback and let you track what happens while previewing or capturing.
Face detection
Face detection is based on android.hardware.camera2.params.Face, a class that ships with Camera2. Face objects are read from the CaptureResult that is delivered to the CameraCaptureSession.CaptureCallback. Each Face wraps a Rect describing the face's basic bounding box, and can also return the positions of the two eyes and the mouth, each as a Point.
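For reference, here is a minimal sketch of reading these fields from a CaptureResult (the method name logFirstFace is made up for this sketch; eye and mouth positions are only reported when the face detection mode is FULL, in SIMPLE mode they are null):

import android.graphics.Point;
import android.graphics.Rect;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.params.Face;
import android.util.Log;

// Logs the fields of the first detected face in a CaptureResult, if any.
static void logFirstFace(CaptureResult result) {
    Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
    if (faces == null || faces.length == 0) {
        return;
    }
    Face face = faces[0];
    Rect bounds = face.getBounds();              // bounding box, in sensor coordinates
    int score = face.getScore();                 // detection confidence, 1..100
    Point leftEye = face.getLeftEyePosition();   // null unless mode is FULL
    Point rightEye = face.getRightEyePosition(); // null unless mode is FULL
    Point mouth = face.getMouthPosition();       // null unless mode is FULL
    Log.d("FaceDemo", "bounds=" + bounds + " score=" + score
            + " leftEye=" + leftEye + " rightEye=" + rightEye + " mouth=" + mouth);
}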
Main code
We use a TextureView as the target of the camera preview. Create a class that implements the TextureView.SurfaceTextureListener interface and open the camera in its onSurfaceTextureAvailable() override:
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    // Configure the camera's initial parameters
    setUpCamera();
    // Keep a reference to the SurfaceTexture
    surfaceTexture = surface;
    // Match the SurfaceTexture's default buffer size to the chosen preview size
    surfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    // Start the background thread used for the camera callbacks
    openBackgroundThread();
    // Open the camera
    openCamera();
}
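The code above also calls openBackgroundThread(), which the original post never shows, and TextureView.SurfaceTextureListener declares three more methods that the class has to implement. A minimal sketch, assuming mBackgroundThread and mBackgroundHandler are fields of the same class (uses android.os.Handler and android.os.HandlerThread):

// Starts a HandlerThread whose Handler is handed to the Camera2 callbacks,
// so camera work stays off the UI thread.
private void openBackgroundThread() {
    mBackgroundThread = new HandlerThread("CameraBackground");
    mBackgroundThread.start();
    mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
}

@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    // Nothing to do here; handle preview resizing if your layout can change
}

@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
    // Returning true means we do not need to release the SurfaceTexture ourselves
    return true;
}

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    // Called for every new preview frame; nothing to do here
}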
Step 1. setUpCamera() configures the camera's initial parameters, including enabling face detection:
/**
 * Configure the Camera2 initialization parameters
 */
private void setUpCamera() {
    cameraManager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
    try {
        for (String id : cameraManager.getCameraIdList()) {
            // Get the CameraCharacteristics describing this camera
            characteristics = cameraManager.getCameraCharacteristics(id);
            // Only use the front-facing camera
            Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                mCameraId = id;
                StreamConfigurationMap streamConfigurationMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                // We preview into a SurfaceTexture, so query the output sizes for that class
                sizes = streamConfigurationMap.getOutputSizes(SurfaceTexture.class);
                // Pick the preview size
                mPreviewSize = sizes[0];
                // Query the face detection capabilities
                int[] FD = characteristics.get(CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
                int maxFD = characteristics.get(CameraCharacteristics.STATISTICS_INFO_MAX_FACE_COUNT);
                if (FD.length > 0) {
                    List<Integer> fdList = new ArrayList<>();
                    for (int FaceD : FD) {
                        fdList.add(FaceD);
                        Log.e(TAG, "setUpCameraOutputs: FD type:" + Integer.toString(FaceD));
                    }
                    Log.e(TAG, "setUpCameraOutputs: FD count" + Integer.toString(maxFD));
                    if (maxFD > 0) {
                        mFaceDetectSupported = true;
                        // Use the most capable face detection mode the camera offers
                        mFaceDetectMode = Collections.max(fdList);
                    }
                }
            }
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
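Note that the code above simply takes sizes[0], which may not match the TextureView's aspect ratio. A common refinement is a small helper that prefers an output size with the same aspect ratio as the view; the name chooseOptimalSize below is made up for this sketch, and the sensor-to-display rotation (which can swap width and height) is ignored for brevity:

// Returns the first output size whose aspect ratio matches the view,
// falling back to the first entry if none matches (uses android.util.Size).
private static Size chooseOptimalSize(Size[] choices, int viewWidth, int viewHeight) {
    for (Size option : choices) {
        if (option.getWidth() * viewHeight == option.getHeight() * viewWidth) {
            return option;
        }
    }
    return choices[0];
}

In setUpCamera() you would then assign mPreviewSize = chooseOptimalSize(sizes, previewWidth, previewHeight) instead of sizes[0], where previewWidth and previewHeight come from the TextureView.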
Step 2. openCamera() opens the camera, first checking whether the camera permission has been granted:
/**
 * Check the camera permission and open the camera
 */
public void openCamera() {
    try {
        // Only open the camera if the CAMERA permission has been granted
        if (PermissionChecker.checkSelfPermission(mContext, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            cameraManager.openCamera(mCameraId, cameraCallback, mBackgroundHandler);
        } else {
            Toast.makeText(mContext, "Please grant the camera permission", Toast.LENGTH_SHORT).show();
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
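On Android 6.0 (API 23) and later the CAMERA permission also has to be requested at runtime; the code above only shows a toast when it is missing. A minimal sketch, assuming the code runs inside an Activity, REQUEST_CAMERA_PERMISSION is an arbitrary request code you define, and ContextCompat/ActivityCompat come from the support (androidx.core) library:

private static final int REQUEST_CAMERA_PERMISSION = 1;

// Asks the user for the CAMERA permission; the answer arrives in the
// Activity's onRequestPermissionsResult() callback.
private void requestCameraPermission(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.CAMERA},
                REQUEST_CAMERA_PERMISSION);
    }
}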
Step 3. Once the camera has opened successfully, start the preview in the onOpened() method of CameraDevice.StateCallback:
// The CameraDevice.StateCallback passed to openCamera()
private CameraDevice.StateCallback cameraCallback = new CameraDevice.StateCallback() {
    // Called when the camera has been opened successfully
    @Override
    public void onOpened(CameraDevice camera) {
        // Keep the CameraDevice
        cameraDevice = camera;
        // Start the preview
        startPreview();
    }

    // Called when the connection to the camera is lost
    @Override
    public void onDisconnected(CameraDevice camera) {
        camera.close();
        cameraDevice = null;
    }

    // Called when the camera encounters an error
    @Override
    public void onError(CameraDevice camera, int error) {
        camera.close();
        cameraDevice = null;
    }
};
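The post never shows the matching tear-down. A minimal sketch of a closeCamera() that releases the session, the device and the background thread used above (typically called from onSurfaceTextureDestroyed() or the Activity's onPause()):

// Releases the capture session, the camera device and the background thread.
public void closeCamera() {
    if (captureSession != null) {
        captureSession.close();
        captureSession = null;
    }
    if (cameraDevice != null) {
        cameraDevice.close();
        cameraDevice = null;
    }
    if (mBackgroundThread != null) {
        mBackgroundThread.quitSafely();
        mBackgroundThread = null;
        mBackgroundHandler = null;
    }
}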
Step 4. startPreview() starts the preview and logs the face coordinates returned by the camera:
public void startPreview() {
    try {
        Surface surface = new Surface(surfaceTexture);
        previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        previewRequestBuilder.addTarget(surface);
        /*previewRequestBuilder.addTarget(mImageReader.getSurface());*/
        previewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL);
        // mImageReader is assumed to be initialized elsewhere (e.g. for still capture);
        // if you only need the preview, pass Arrays.asList(surface) instead.
        cameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession session) {
                try {
                    // Apply the face detection mode before building the request,
                    // otherwise the setting would not be part of captureRequest
                    setFaceDetect(previewRequestBuilder, mFaceDetectMode);
                    // Build the CaptureRequest
                    captureRequest = previewRequestBuilder.build();
                    captureSession = session;
                    captureSession.setRepeatingRequest(captureRequest, new CameraCaptureSession.CaptureCallback() {
                        /**
                         * Process a result returned by the camera and extract the face data
                         * @param result the capture result
                         */
                        private void process(CaptureResult result) {
                            // Get the detected faces
                            Face[] face = result.get(CaptureResult.STATISTICS_FACES);
                            // Only proceed if at least one face was detected
                            if (face != null && face.length > 0) {
                                Log.e(TAG, "face detected " + Integer.toString(face.length));
                                // Bounding box of the first face
                                Rect bounds = face[0].getBounds();
                                float y = mPreviewSize.getHeight() / 2 - bounds.top;
                                Log.e("height", String.valueOf(mPreviewSize.getWidth()));
                                Log.e("top", String.valueOf(y));
                                Log.e("left", String.valueOf(bounds.left));
                                Log.e("right", String.valueOf(bounds.right));
                            }
                        }

                        @Override
                        public void onCaptureStarted(CameraCaptureSession session, CaptureRequest request, long timestamp, long frameNumber) {
                            super.onCaptureStarted(session, request, timestamp, frameNumber);
                        }

                        @Override
                        public void onCaptureProgressed(CameraCaptureSession session, CaptureRequest request, CaptureResult partialResult) {
                            process(partialResult);
                        }

                        @Override
                        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
                            process(result);
                        }
                    }, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
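setFaceDetect() is called above but not shown in the original post; presumably it just applies the best supported mode found in setUpCamera(). A minimal sketch under that assumption:

// Applies the requested face detection mode, but only if the camera supports face detection.
private void setFaceDetect(CaptureRequest.Builder requestBuilder, int faceDetectMode) {
    if (mFaceDetectSupported) {
        requestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, faceDetectMode);
    }
}

Also note that the Rect returned by Face.getBounds() is expressed in the sensor's active array coordinate system (see CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE), so it has to be scaled and rotated before it can be drawn on top of the TextureView.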
Finally, add a TextureView to the layout file, look it up in the Activity, and set its SurfaceTextureListener to the class we just wrote, and that's it.
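For completeness, a minimal sketch of that wiring, assuming the listener class above is called CameraPreviewListener, its constructor takes a Context, and the TextureView has the id texture_view in activity_main.xml (all of these names are placeholders):

public class MainActivity extends AppCompatActivity {

    private TextureView textureView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        textureView = findViewById(R.id.texture_view);
        // CameraPreviewListener is the SurfaceTextureListener implementation shown above
        textureView.setSurfaceTextureListener(new CameraPreviewListener(this));
    }
}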
----------------------------
Original article: https://blog.csdn.net/SakuraMashiro/article/details/78334248