Analysis of the Android Camera Data Flow

The previous article, "Android Camera — Architecture Overview" (/kf/201202/119071.html), gave a brief introduction to the layered structure.

This article focuses on the data flow. The camera is generally used for image preview, still capture, and video recording. Here we first analyze the data flow for preview and still capture; the video-telephony part will be analyzed later.

 

1. A few additions on how the HAL handles camera data

 

On Linux, V4L2 is used as the camera driver framework. V4L2 is controlled from user space through various ioctl calls, and mmap can be used to map driver memory into the process.

Introduction to the commonly used ioctls:
The ioctl command handlers are wired up as follows:
 .vidioc_querycap         = vidioc_querycap,         // query driver capabilities
 .vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap, // enumerate the video formats the driver supports
 .vidioc_g_fmt_vid_cap    = vidioc_g_fmt_vid_cap,    // get the current video capture format
 .vidioc_s_fmt_vid_cap    = vidioc_s_fmt_vid_cap,    // set the current video capture format
 .vidioc_try_fmt_vid_cap  = vidioc_try_fmt_vid_cap,  // validate a capture format without changing state
 .vidioc_reqbufs          = vidioc_reqbufs,          // request (allocate) buffers
 .vidioc_querybuf         = vidioc_querybuf,         // query a buffer allocated by VIDIOC_REQBUFS so it can be mmap'ed
 .vidioc_qbuf             = vidioc_qbuf,             // enqueue an empty buffer into the driver's queue
 .vidioc_dqbuf            = vidioc_dqbuf,            // dequeue a filled buffer from the driver's queue
 .vidioc_streamon         = vidioc_streamon,         // start streaming (capture)
 .vidioc_streamoff        = vidioc_streamoff,        // stop streaming
 .vidioc_cropcap          = vidioc_cropcap,          // query the driver's cropping capabilities
 .vidioc_g_crop           = vidioc_g_crop,           // get the current cropping rectangle
 .vidioc_s_crop           = vidioc_s_crop,           // set the cropping rectangle
 .vidioc_querystd         = vidioc_querystd,         // detect the video standard on the input, e.g. PAL or NTSC

At initialization time the basic camera parameters are configured, and the mmap system call is then used to map the driver's buffer queue into user space.
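The ioctl sequence above can be sketched from the user-space side. This is a minimal demonstration, not production code: "/dev/video0" is an assumed device node, error handling is reduced to early returns, and if no camera is present the demo simply reports that and exits cleanly.

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

// Walk the sequence described above:
// REQBUFS -> QUERYBUF + mmap -> QBUF -> STREAMON -> DQBUF -> QBUF -> STREAMOFF.
// Every path returns 0 so the demo is safe to run without a camera.
int captureOneFrame(const char* dev) {
    int fd = open(dev, O_RDWR);
    if (fd < 0) { std::printf("no camera device, skipping demo\n"); return 0; }

    v4l2_requestbuffers req = {};
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) {   // allocate driver buffers
        std::printf("VIDIOC_REQBUFS failed\n"); close(fd); return 0;
    }

    void*  maps[4] = {};
    size_t lens[4] = {};
    for (unsigned i = 0; i < req.count && i < 4; ++i) {
        v4l2_buffer buf = {};
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);        // learn each buffer's offset/length
        lens[i] = buf.length;
        maps[i] = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);            // hand the empty buffer to the driver
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);           // start streaming

    v4l2_buffer buf = {};
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {    // block until a filled frame arrives
        std::printf("frame: %u bytes in buffer %u\n", buf.bytesused, buf.index);
        ioctl(fd, VIDIOC_QBUF, &buf);            // recycle the buffer
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    for (unsigned i = 0; i < 4; ++i)
        if (maps[i]) munmap(maps[i], lens[i]);
    close(fd);
    return 0;
}
```

The mmap step is what makes the "map the driver's buffer queue into user space" sentence concrete: the frames are never copied between kernel and user space, only queued and dequeued by index.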

The HAL runs two main threads:

pictureThread (still-capture thread)
When the user takes a picture, this thread runs once (it does not loop): it checks the queue for a frame and dequeues it.
Capture data must always reach the Java layer, so the frame can be converted to JPEG before being posted, or posted to Java as RGB data.

previewThread (preview thread)
The preview thread is started when the preview method is called. It loops, checking whether a frame is available in the queue; if so, it reads the frame. Since the frame is in YUV format, it must be converted to RGB before being handed to the display framework. The converted data can also be fed to a video-encoding module; storing the encoded output implements recording.

All data posted upward goes through dataCallback, unless overlay is implemented.
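The YUV-to-RGB conversion the preview thread performs can be sketched for NV21 (YCrCb 4:2:0 semi-planar, the common Android preview layout). The fixed-point BT.601 full-range coefficients below are a typical choice, not necessarily what any particular HAL uses:

```cpp
#include <algorithm>
#include <cstdint>

// Convert one NV21 frame (Y plane followed by an interleaved VU plane)
// into packed RGB888. One VU pair is shared by each 2x2 block of pixels.
void nv21ToRgb(const uint8_t* nv21, int width, int height, uint8_t* rgb) {
    const uint8_t* yPlane  = nv21;
    const uint8_t* vuPlane = nv21 + width * height;
    auto clamp8 = [](int v) { return (uint8_t)std::min(255, std::max(0, v)); };
    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            int y = yPlane[j * width + i];
            int vuIndex = (j / 2) * width + (i & ~1);  // index of this pixel's VU pair
            int v = vuPlane[vuIndex] - 128;            // NV21 stores V first...
            int u = vuPlane[vuIndex + 1] - 128;        // ...then U
            uint8_t* p = rgb + (j * width + i) * 3;
            p[0] = clamp8(y + ((359 * v) >> 8));           // R = Y + 1.402 V
            p[1] = clamp8(y - ((88 * u + 183 * v) >> 8));  // G = Y - 0.344 U - 0.714 V
            p[2] = clamp8(y + ((454 * u) >> 8));           // B = Y + 1.772 U
        }
    }
}
```

A real preview path would do this per frame (often in hardware); the sketch only shows the arithmetic the prose refers to.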

 

 

2. Data flow control

The previous section covered the control hierarchy and logic. To better understand where the data actually goes, and to lay the groundwork for later optimization, it is well worth studying the data path.

Taking JPEG storage as the example:
Registering the callbacks:
public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;
    native_takePicture();
}

Handling the callback data:
@Override
public void handleMessage(Message msg) {
    switch(msg.what) {
    case CAMERA_MSG_SHUTTER:           // notification that capture data has arrived
        …
    case CAMERA_MSG_RAW_IMAGE:         // handles the uncompressed picture
        …
    case CAMERA_MSG_COMPRESSED_IMAGE:  // handles the compressed (JPEG) picture
        if (mJpegCallback != null) {
            mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
        }
        return;
    case CAMERA_MSG_PREVIEW_FRAME:     // handles preview frame data
        …
    }
}

The application registers its callbacks:
android.hardware.Camera mCameraDevice; // the Java-layer Camera object
mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
        mPostViewPictureCallback, new JpegPictureCallback(loc));

How the application receives the data:
private final class JpegPictureCallback implements PictureCallback {
    public void onPictureTaken(
            final byte [] jpegData, final android.hardware.Camera camera) {
        …
        mImageCapture.storeImage(jpegData, camera, mLocation);
        …
    }
}

private class ImageCapture {
    private int storeImage(byte[] data, Location loc) {
        ImageManager.addImage(
                mContentResolver,
                title,
                dateTaken,
                loc, // location from gps/network
                ImageManager.CAMERA_IMAGE_BUCKET_NAME, filename,
                null, data,
                degree);
    }
}

–> This is where the data is actually stored. Android offers four common ways to persist shared data:
ContentProvider, SharedPreferences, files, and SQLite. Here the file approach is used.

//
// Stores a bitmap or a jpeg byte array to a file (using the specified
// directory and filename). Also add an entry to the media store for
// this picture. The title, dateTaken, location are attributes for the
// picture. The degree is a one element array which returns the orientation
// of the picture.
//
public static Uri addImage(ContentResolver cr, String title, long dateTaken,
        Location location, String directory, String filename,
        Bitmap source, byte[] jpegData, int[] degree) {
        …
        File file = new File(directory, filename);
        outputStream = new FileOutputStream(file);
        if (source != null) {
            source.compress(CompressFormat.JPEG, 75, outputStream);
            degree[0] = 0;
        } else {
            outputStream.write(jpegData);
            degree[0] = getExifOrientation(filePath);
        }
        …
}
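The `outputStream.write(jpegData)` branch of addImage boils down to writing a byte buffer to a file. A minimal native sketch of that step (the function name and paths are illustrative, not part of the framework):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write a JPEG byte buffer to <directory>/<filename>, mirroring the
// jpegData branch of addImage(). Returns true on success.
bool storeJpegBytes(const std::string& directory, const std::string& filename,
                    const std::vector<uint8_t>& jpegData) {
    std::ofstream out(directory + "/" + filename, std::ios::binary);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(jpegData.data()),
              static_cast<std::streamsize>(jpegData.size()));
    return out.good();
}
```

The framework version additionally inserts a row into the media store so the gallery can find the picture; the file write itself is this simple.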

holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
SURFACE_TYPE_PUSH_BUFFERS indicates that the Surface holds no data of its own: the data it uses is pushed by another object. Camera preview uses exactly this kind of Surface, with the camera supplying the buffers, which keeps preview smooth.

OK, at this point we understand the Java-layer callback flow. Next, the data flow across the Java / JNI / C++ layers.

Here it is clearer to analyze from the bottom up:

1. The callbacks provided by CameraHardwareInterface:

typedef void (*notify_callback)(int32_t msgType,
                                int32_t ext1,
                                int32_t ext2,
                                void* user);

typedef void (*data_callback)(int32_t msgType,
                              const sp<IMemory>& dataPtr,
                              void* user);

typedef void (*data_callback_timestamp)(nsecs_t timestamp,
                                        int32_t msgType,
                                        const sp<IMemory>& dataPtr,
                                        void* user);

The registration interface is as follows:
/** Set the notification and data callbacks */
virtual void setCallbacks(notify_callback notify_cb,
                          data_callback data_cb,
                          data_callback_timestamp data_cb_timestamp,
                          void* user) = 0;
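The notify/data callback pattern above can be simulated in isolation. This is a deliberately simplified sketch: `HalSim` and `ClientSim` are invented types, and `sp<IMemory>` is replaced with a plain `const void*`; only the function-pointer-plus-`user`-cookie mechanism is real.

```cpp
#include <cstdint>

// Simplified stand-ins for the real signatures (sp<IMemory> -> const void*).
typedef void (*notify_callback)(int32_t msgType, int32_t ext1, int32_t ext2, void* user);
typedef void (*data_callback)(int32_t msgType, const void* dataPtr, void* user);

enum { CAMERA_MSG_SHUTTER = 1, CAMERA_MSG_COMPRESSED_IMAGE = 8 };

// A toy HAL: setCallbacks() stores the function pointers and the caller's
// cookie; "hardware" events then invoke them, as the interface implies.
struct HalSim {
    notify_callback notify_cb = nullptr;
    data_callback   data_cb   = nullptr;
    void*           user      = nullptr;

    void setCallbacks(notify_callback n, data_callback d, void* u) {
        notify_cb = n; data_cb = d; user = u;
    }
    void fakeCapture(const void* jpeg) {
        if (notify_cb) notify_cb(CAMERA_MSG_SHUTTER, 0, 0, user);
        if (data_cb)   data_cb(CAMERA_MSG_COMPRESSED_IMAGE, jpeg, user);
    }
};

// The client records which messages arrived. The 'user' pointer carries the
// client object across the C-style boundary, the same trick CameraService
// plays with getClientFromCookie().
struct ClientSim {
    int lastNotify = 0, lastData = 0;
};
static void onNotify(int32_t msg, int32_t, int32_t, void* user) {
    static_cast<ClientSim*>(user)->lastNotify = msg;
}
static void onData(int32_t msg, const void*, void* user) {
    static_cast<ClientSim*>(user)->lastData = msg;
}
```

This is why the real interface threads a `void* user` through every callback: the HAL is C-style and has no other way to get back to the C++ client object.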

2. How CameraService handles messages from the HAL:
void CameraService::Client::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr, void* user)
{
 //…
    switch (msgType) { // ------------ 1: a message arrives from the HAL
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(dataPtr);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr); // --- 2: handle the compressed-picture message
            // --> c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem); --- 3: which invokes the client callback below
            break;
        default:
            if (c != NULL) {
                c->dataCallback(msgType, dataPtr);
            }
            break;
    }
//…
}

void CameraService::Client::notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2, void* user)
{
    LOGV("notifyCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) {
        return;
    }

    switch (msgType) {
        case CAMERA_MSG_SHUTTER:
            // ext1 is the dimension of the yuv picture.
            client->handleShutter((image_rect_type *)ext1);
            break;
        default:
            sp<ICameraClient> c = client->mCameraClient;
            if (c != NULL) {
                c->notifyCallback(msgType, ext1, ext2); // ------ 4: forward the notification (service side)
            }
            break;
    }
}

3. Handling on the client side:
// callback from camera service when frame or image is ready ------------- data callback handling
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr);
    }
}

// callback from camera service ------------------- notification callback handling
void Camera::notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->notify(msgType, ext1, ext2);
    }
}

4. JNI: android_hardware_Camera.cpp
// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr);
    …
};

The data is passed from the JNI layer up to Java via the copyAndPost function:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr)
{
    // return data based on callback type
    switch(msgType) {
    case CAMERA_MSG_VIDEO_FRAME:
        // should never happen
        break;
    // don't return raw data to Java
    case CAMERA_MSG_RAW_IMAGE:
        LOGV("rawCallback");
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, NULL);
        break;
    default:
        // TODO: Change to LOGV
        LOGV("dataCallback(%d, %p)", msgType, dataPtr.get());
        copyAndPost(env, dataPtr, msgType);
        break;
    }
}

The main data handling; here IMemory is used to carry the data across:
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    ssize_t offset;
    size_t size;
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        uint8_t *heapBase = (uint8_t*)heap->base();
        // the case where the application manages the buffer
        const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
        obj = env->NewByteArray(size);
        env->SetByteArrayRegion(obj, 0, size, data);
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
}

Note the C++-to-Java call here:
fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
                                           "(Ljava/lang/Object;IIILjava/lang/Object;)V");

Defined in Camera.java:
private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m); // dispatched to the callbacks the app registered via takePicture
    }
}
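Note the weak-reference guard in postEventFromNative: if the Java Camera object has been garbage-collected, the event is silently dropped instead of resurrecting it. The same pattern can be mimicked with std::weak_ptr; `CameraSim` and `postEvent` below are an invented simulation, not framework code.

```cpp
#include <memory>

struct CameraSim {
    int lastWhat = 0;
    void handleEvent(int what) { lastWhat = what; }
};

// Mirrors postEventFromNative: resolve the weak reference first;
// if the target object is already gone, drop the event.
void postEvent(const std::weak_ptr<CameraSim>& ref, int what) {
    if (auto c = ref.lock()) c->handleEvent(what);
}
```

The framework keeps only a weak reference on the native side so that a leaked native context can never keep the Java Camera object alive.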
Because the video data stream is large, it is usually not sent up to the Java layer for processing; it is enough to set the output Surface and handle the frames natively.

OK, with the analysis above the data path is complete. Now an example:

Preview: starts with startPreview and ends with stopPreview
startPreview() –> startCameraMode() –> startPreviewMode()

status_t CameraService::Client::startPreviewMode()
{
    if (mUseOverlay) {
        // If preview display has been set, set overlay now.
        if (mSurface != 0) {
            ret = setOverlay(); // --> the createOverlay/setOverlay path
        }
        ret = mHardware->startPreview();
    } else {
        ret = mHardware->startPreview();
        // If preview display has been set, register preview buffers now.
        if (mSurface != 0) {
            // Unregister here because the surface registered with raw heap.
            mSurface->unregisterBuffers();
            ret = registerPreviewBuffers();
        }
    }
    …
}

Whether an overlay is used is determined by reading CameraHardwareInterface::useOverlay. With an overlay, the data flow is handled inside the camera HAL; it is enough to hand the overlay device to the HAL via setOverlay:
–> mHardware->setOverlay(new Overlay(mOverlayRef));

Without an overlay, the preview data must be obtained from the camera hardware; the memory is registered with the output ISurface by calling ISurface::registerBuffers, and SurfaceFlinger finally composites and outputs it:
–>
    // don't use a hardcoded format here
    ISurface::BufferHeap buffers(w, h, w, h,
                                 HAL_PIXEL_FORMAT_YCrCb_420_SP,
                                 mOrientation,
                                 0,
                                 mHardware->getPreviewHeap());

    status_t ret = mSurface->registerBuffers(buffers);

The data-callback flow:
a. Register the callbacks:
mHardware->setCallbacks(notifyCallback,
                        dataCallback,
                        dataCallbackTimestamp,
                        mCameraService.get());

b. Dispatch the message:
void CameraService::Client::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr, void* user)
{
    …
    switch (msgType) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(dataPtr);
            break;
        …
    }
}

c. Output the data:
// preview callback - frame buffer update
void CameraService::Client::handlePreviewData(const sp<IMemory>& mem)
{
    // push the frame to the output via ISurface::postBuffer
    if (mSurface != NULL) {
        mSurface->postBuffer(offset);
    }

    // invoke the ICameraClient callback to pass the frame to the upper layer
    // Is the received frame copied out or not?
    if (flags & FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
        LOGV("frame is copied");
        copyFrameAndPostCopiedFrame(c, heap, offset, size);
    } else {
        LOGV("frame is forwarded");
        c->dataCallback(CAMERA_MSG_PREVIEW_FRAME, mem);
    }
}

From andyhuabing's column.
