In the previous article we covered setDataSource, the first step of MediaPlayer playback. Now let's walk through the prepareAsync flow. Before prepare there is one more step, setDisplay, which obtains a SurfaceTexture for displaying the video frames:
setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface, jboolean mediaPlayerMustBeAlive)
{
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
    ...
    sp<ISurfaceTexture> new_st;

    if (jsurface) {
        sp<Surface> surface(Surface_getSurface(env, jsurface));
        if (surface != NULL) {
            // obtain the SurfaceTexture through the Surface
            new_st = surface->getSurfaceTexture();
            new_st->incStrong(thiz);
            ...
        }
        ...
    }
    mp->setVideoSurfaceTexture(new_st);
}
Why use a SurfaceTexture rather than a Surface for display? Before ICS, SurfaceView was used to show video or OpenGL content: SurfaceView renders onto a Surface, while TextureView renders onto a SurfaceTexture. So what's the difference between the two? A SurfaceView does not live in the application's window; it creates a window of its own to display OpenGL or video content. The upside is that the application window never needs to be redrawn: the SurfaceView can keep refreshing on its own. But this also brings limitations. Since the SurfaceView is not attached to the application window, it cannot be moved, scaled, or rotated, which makes it painful to use inside a ListView or ScrollView. TextureView solves these problems nicely: besides everything SurfaceView offers, it also behaves like an ordinary View and can be used as one.
With the SurfaceTexture in hand, we can call prepare/prepareAsync. First, a rough sequence diagram of the whole flow:
We'll skip the JNI layer and go straight to the prepareAsync_l method in mediaplayer.cpp under libmedia. The _l suffix is an Android locking convention: it marks a function that must be called with the object's lock (mLock) already held, so the public prepareAsync() acquires the lock first and then delegates to the _l variant.
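For reference, the public wrapper in mediaplayer.cpp is essentially just this (paraphrased from AOSP):

status_t MediaPlayer::prepareAsync() {
    ALOGV("prepareAsync");
    Mutex::Autolock _l(mLock);  // take the lock here...
    return prepareAsync_l();    // ...so the _l variant can assume it is held
}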
status_t MediaPlayer::prepareAsync_l()
{
    if ( (mPlayer != 0) && ( mCurrentState & ( MEDIA_PLAYER_INITIALIZED | MEDIA_PLAYER_STOPPED) ) ) {
        mPlayer->setAudioStreamType(mStreamType);
        mCurrentState = MEDIA_PLAYER_PREPARING;
        return mPlayer->prepareAsync();
    }
    ALOGE("prepareAsync called in state %d", mCurrentState);
    return INVALID_OPERATION;
}
In the code above we meet mPlayer. Readers of the previous chapter will remember it: it is the BpMediaPlayer we obtained from MediaPlayerService. Through BpMediaPlayer we can drive straight through to AwesomePlayer, the component that does the real work; the MediaPlayerService::Client and StagefrightPlayer along the way are mere messengers, thin forwarding interfaces not worth deep study (see the sketch below). Once we enter prepareAsync_l, the player is in the MEDIA_PLAYER_PREPARING state. Now let's see what AwesomePlayer actually does.
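To show just how thin those intermediate layers are, StagefrightPlayer::prepareAsync() is essentially a one-line forwarder (paraphrased from AOSP):

// frameworks/av/media/libmediaplayerservice/StagefrightPlayer.cpp (paraphrased)
status_t StagefrightPlayer::prepareAsync() {
    return mPlayer->prepareAsync();  // mPlayer here is the AwesomePlayer
}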
The code lives in frameworks/av/media/libstagefright/AwesomePlayer.cpp.
First, prepareAsync_l:
status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }

    modifyFlags(PREPARING, SET);
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}
Here we encounter TimedEventQueue, a timed event queue model; it is AwesomePlayer's counterpart to Handler. It works by wrapping an event together with its scheduled firing time into a QueueItem and inserting it into the queue via postEvent; when the time arrives, the event is handled according to its event id.
First, let's see what TimedEventQueue's start() method (mQueue.start()) does:
frameworks/av/media/libstagefright/TimedEventQueue.cpp
void TimedEventQueue::start() {
    if (mRunning) {
        return;
    }
    ...
    pthread_create(&mThread, &attr, ThreadWrapper, this);
    ...
}
The purpose is obvious: spawn a worker thread off the main thread. Those who haven't written much C/C++ may find pthread_create unfamiliar, so let's break it down:
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

thread: returns the ID of the newly created thread
attr: specifies the attributes of the thread to create (NULL for defaults)
start_routine: a function pointer to the function the new thread will run
arg: the argument passed to that function (a minimal sketch follows below)
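Here is a minimal, self-contained sketch (my own example, not from the Android sources) of the same trampoline pattern TimedEventQueue uses: the static entry function receives the object pointer through the void * argument, exactly as ThreadWrapper receives "this":

#include <pthread.h>
#include <cstdio>

struct Worker {
    // static trampoline: pthreads can only call free functions / static members
    static void *ThreadWrapper(void *me) {
        static_cast<Worker *>(me)->threadEntry();
        return NULL;
    }

    void threadEntry() {
        printf("worker thread running\n");
    }
};

int main() {
    Worker worker;
    pthread_t thread;
    // args: out-param thread id, default attributes, entry function, user argument
    pthread_create(&thread, NULL, Worker::ThreadWrapper, &worker);
    pthread_join(thread, NULL);  // wait for the thread to finish
    return 0;
}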
With that covered, let's look at ThreadWrapper, the function invoked on the new thread:
// static
void *TimedEventQueue::ThreadWrapper(void *me) {
    ...
    static_cast<TimedEventQueue *>(me)->threadEntry();
    return NULL;
}
Tracing into threadEntry:
frameworks/av/media/libstagefright/TimedEventQueue.cpp
void TimedEventQueue::threadEntry() {
    prctl(PR_SET_NAME, (unsigned long)"TimedEventQueue", 0, 0, 0);

    for (;;) {
        int64_t now_us = 0;
        sp<Event> event;

        {
            Mutex::Autolock autoLock(mLock);

            if (mStopped) {
                break;
            }

            while (mQueue.empty()) {
                mQueueNotEmptyCondition.wait(mLock);
            }

            event_id eventID = 0;
            for (;;) {
                if (mQueue.empty()) {
                    // The only event in the queue could have been cancelled
                    // while we were waiting for its scheduled time.
                    break;
                }

                List<QueueItem>::iterator it = mQueue.begin();
                eventID = (*it).event->eventID();
                ...
                static int64_t kMaxTimeoutUs = 10000000ll;  // 10 secs
                ...
                status_t err = mQueueHeadChangedCondition.waitRelative(
                        mLock, delay_us * 1000ll);

                if (!timeoutCapped && err == -ETIMEDOUT) {
                    // We finally hit the time this event is supposed to
                    // trigger.
                    now_us = getRealTimeUs();
                    break;
                }
            }
            ...
            event = removeEventFromQueue_l(eventID);
        }

        if (event != NULL) {
            // Fire event with the lock NOT held.
            event->fire(this, now_us);
        }
    }
}
From this code we can see the thread's main job: check whether the queue is empty (at the start it certainly is) and block on the not-empty condition until a QueueItem is inserted. The item that wakes it up here is the one posted by mQueue.postEvent(mAsyncPrepareEvent). Before getting into postEvent, let's look at mAsyncPrepareEvent, the Event wrapped in an AwesomeEvent:
struct AwesomeEvent : public TimedEventQueue::Event {
    AwesomeEvent(AwesomePlayer *player, void (AwesomePlayer::*method)())
        : mPlayer(player), mMethod(method) {}

protected:
    // invoked by the queue thread when the scheduled time arrives
    virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
        (mPlayer->*mMethod)();
    }

private:
    AwesomePlayer *mPlayer;
    void (AwesomePlayer::*mMethod)();
};
From this struct we can see that firing the event invokes a member function of AwesomePlayer. Recall how mAsyncPrepareEvent was built:

mAsyncPrepareEvent = new AwesomeEvent(
        this, &AwesomePlayer::onPrepareAsyncEvent);

So when mAsyncPrepareEvent fires, AwesomePlayer::onPrepareAsyncEvent is what runs.
Now back to postEvent. We called TimedEventQueue a timed event queue model, and we've just seen the Event, but where is the delay time? Could it be added in postEvent? Let's follow it in:
TimedEventQueue::event_id TimedEventQueue::postEvent(const sp<Event> &event) {
    // Reserve an earlier timeslot an INT64_MIN to be able to post
    // the StopEvent to the absolute head of the queue.
    return postTimedEvent(event, INT64_MIN + 1);
}
At last we see the delay time: INT64_MIN + 1. The real work happens in postTimedEvent: it wraps the posted event and its time into a QueueItem, inserts it into the queue, and signals that the queue is no longer empty so the blocked thread can resume; once the delay elapses, the event matching that event_id is pulled and fired.
frameworks/av/media/libstagefright/TimedEventQueue.cpp
TimedEventQueue::event_id TimedEventQueue::postTimedEvent(
        const sp<Event> &event, int64_t realtime_us) {
    Mutex::Autolock autoLock(mLock);

    event->setEventID(mNextEventID++);
    ...
    // (elided) "it" is the time-sorted insertion position found by walking mQueue
    QueueItem item;
    item.event = event;
    item.realtime_us = realtime_us;

    if (it == mQueue.begin()) {
        mQueueHeadChangedCondition.signal();
    }

    mQueue.insert(it, item);

    mQueueNotEmptyCondition.signal();

    return event->eventID();
}
That wraps up the TimedEventQueue timed event queue model; its mechanics resemble the native (C/C++) side of Handler.
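As a quick usage illustration (my own sketch, not from the sources quoted above, though postEventWithDelay and cancelEvent are real TimedEventQueue methods; the 10 ms delay is an arbitrary example value):

// schedule onVideoEvent to fire ~10ms from now, AwesomePlayer-style
sp<TimedEventQueue::Event> event = new AwesomeEvent(
        this, &AwesomePlayer::onVideoEvent);
TimedEventQueue::event_id id = mQueue.postEventWithDelay(event, 10000ll);

// if it hasn't fired yet, it can be removed from the queue again
mQueue.cancelEvent(id);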
Back when setDataSource instantiated AwesomePlayer, it also created the following events along the way:
sp<TimedEventQueue::Event> mVideoEvent;
sp<TimedEventQueue::Event> mStreamDoneEvent;
sp<TimedEventQueue::Event> mBufferingEvent;
sp<TimedEventQueue::Event> mCheckAudioStatusEvent;
sp<TimedEventQueue::Event> mVideoLagEvent;
What exactly does each of them do? We'll dig into that when they are actually used; for now, the sketch below shows how they are wired up.
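Paraphrased from the AwesomePlayer constructor: each one is simply an AwesomeEvent bound to the member function that handles it (handler names as in AOSP 4.1):

mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);
mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
mCheckAudioStatusEvent = new AwesomeEvent(
        this, &AwesomePlayer::onCheckAudioStatus);
mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);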
Next, let's talk about the onPrepareAsyncEvent method.
frameworks/av/media/libstagefright/AwesomePlayer.cpp
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);
    ...
    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();  // not taken for a local file
    ...
    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();  // if there is a video track, init the video decoder
    ...
    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();  // if there is an audio track, init the audio decoder
    ...
    modifyFlags(PREPARING_CONNECTED, SET);

    if (isStreamingHTTP()) {
        postBufferingEvent_l();  // only for HTTP streaming, usually not taken
    } else {
        finishAsyncPrepare_l();  // announce that prepare is done, remove the QueueItem
                                 // from the TimedEventQueue, set mAsyncPrepareEvent = NULL
    }
}
Now we finally know prepare's main purpose: find a decoder matching the media type and initialize it. Let's start with how a file that has a video track finds and initializes its decoder.
First, a diagram to get the rough steps:
With the picture in mind, on to the code. initVideoDecoder's job is to initialize the decoder: establish the link to it, settle the decoded output format, and so on.
frameworks/av/media/libstagefright/AwesomePlayer.cpp
status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ...
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false,  // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);
    ...
    status_t err = mVideoSource->start();
    ...
}
Let's first see what the Create function actually does:
frameworks/av/media/libstagefright/OMXCodec.cpp
sp<MediaSource> OMXCodec::Create(
        const sp<IOMX> &omx,
        const sp<MetaData> &meta, bool createEncoder,
        const sp<MediaSource> &source,
        const char *matchComponentName,
        uint32_t flags,
        const sp<ANativeWindow> &nativeWindow) {
    ...
    bool success = meta->findCString(kKeyMIMEType, &mime);
    ...
    /* (1) */ findMatchingCodecs(
            mime, createEncoder, matchComponentName, flags,
            &matchingCodecs, &matchingCodecQuirks);
    ...
    /* (2) */ sp<OMXCodecObserver> observer = new OMXCodecObserver;
    /* (3) */ status_t err = omx->allocateNode(componentName, observer, &node);
    ...
    /* (4) */ sp<OMXCodec> codec = new OMXCodec(
            omx, node, quirks, flags,
            createEncoder, mime, componentName,
            source, nativeWindow);
    /* (5) */ observer->setCodec(codec);
    /* (6) */ err = codec->configureCodec(meta);
    ...
}
First, findMatchingCodecs: it locates a matching decoder component by MIME type. Android 4.1 changed component discovery considerably. Codec info used to be hard-coded in source; now it lives in media_codecs.xml, which after a full build ends up at /etc/media_codecs.xml. Each chip vendor supplies this XML, so adding a codec no longer requires code changes. The stock AOSP entries are generally software decoders. Matching is by rank: a vendor's file typically lists several codecs of the same type, and whichever is listed first wins.
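To get a feel for the format, here is a minimal, hypothetical media_codecs.xml fragment (the component names are illustrative; check your platform's actual file):

<MediaCodecs>
    <Decoders>
        <!-- listed first, so it wins the video/avc match -->
        <MediaCodec name="OMX.qcom.video.decoder.avc" type="video/avc" />
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>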
frameworks/av/media/libstagefright/OMXCodec.cpp
void OMXCodec::findMatchingCodecs(
        const char *mime,
        bool createEncoder, const char *matchComponentName,
        uint32_t flags,
        Vector<String8> *matchingCodecs,
        Vector<uint32_t> *matchingCodecQuirks) {
    ...
    const MediaCodecList *list = MediaCodecList::getInstance();
    ...
    for (;;) {
        ssize_t matchIndex =
            list->findCodecByType(mime, createEncoder, index);
        ...
        matchingCodecs->push(String8(componentName));
        ...
    }
}
frameworks/av/media/libstagefright/MediaCodecList.cpp
const MediaCodecList *MediaCodecList::getInstance() {
    ...
    if (sCodecList == NULL) {
        sCodecList = new MediaCodecList;
    }

    return sCodecList->initCheck() == OK ? sCodecList : NULL;
}

MediaCodecList::MediaCodecList()
    : mInitCheck(NO_INIT) {
    FILE *file = fopen("/etc/media_codecs.xml", "r");

    if (file == NULL) {
        ALOGW("unable to open media codecs configuration xml file.");
        return;
    }

    parseXMLFile(file);
}
With a matching componentName in hand, we can create the component instance; that is the job of allocateNode.
frameworks/av/media/libstagefright/omx/OMX.cpp
status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    ...
    OMXNodeInstance *instance = new OMXNodeInstance(this, observer);

    OMX_COMPONENTTYPE *handle;
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);
    ...
    *node = makeNodeID(instance);
    mDispatchers.add(*node, new CallbackDispatcher(instance));

    instance->setHandle(*node, handle);

    mLiveNodes.add(observer->asBinder(), instance);
    observer->asBinder()->linkToDeath(this);

    return OK;
}
In allocateNode we rely on mMaster to create the component, but when was this mMaster initialized? Look at OMX's constructor:
OMX::OMX()
    : mMaster(new OMXMaster),  // so here it is!
      mNodeCounter(0) {
}
But we never discussed when OMX itself gets constructed. We have to backtrack: it turns out we glossed over it when AwesomePlayer was initialized. My bad:
AwesomePlayer::AwesomePlayer()
    : mQueueStarted(false),
      mUIDValid(false),
      mTimeSource(NULL),
      mVideoRendererIsPreview(false),
      mAudioPlayer(NULL),
      mDisplayWidth(0),
      mDisplayHeight(0),
      mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
      mFlags(0),
      mExtractorFlags(0),
      mVideoBuffer(NULL),
      mDecryptHandle(NULL),
      mLastVideoTimeUs(-1),
      mTextDriver(NULL) {
    CHECK_EQ(mClient.connect(), (status_t)OK);  // this is where it all starts
    ...

mClient is an OMXClient:
status_t OMXClient::connect() {
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);
    // look familiar? this gets us the BpMediaPlayerService

    CHECK(service.get() != NULL);

    mOMX = service->getOMX();
    CHECK(mOMX.get() != NULL);

    if (!mOMX->livesLocally(NULL /* node */, getpid())) {
        ALOGI("Using client-side OMX mux.");
        mOMX = new MuxOMX(mOMX);
    }

    return OK;
}
Now let's jump straight into mediaplayerservice.cpp to see for ourselves:
sp<IOMX> MediaPlayerService::getOMX() {
    Mutex::Autolock autoLock(mLock);

    if (mOMX.get() == NULL) {
        mOMX = new OMX;
    }

    return mOMX;
}
There it is: OMX finally gets created here. Lesson learned: read the code more carefully!
So what did all this digging into OMXMaster's origins buy us?
OMXMaster::OMXMaster()
    : mVendorLibHandle(NULL) {
    addVendorPlugin();
    addPlugin(new SoftOMXPlugin);
}

void OMXMaster::addVendorPlugin() {
    addPlugin("libstagefrighthw.so");
}
So OMXMaster loads each vendor's decoders (libstagefrighthw.so) and also pulls in Google's own software decoders (SoftOMXPlugin). Where does this libstagefrighthw.so live? After a long search: each chip vendor keeps its own under
hardware/XX/media/libstagefrighthw/xxOMXPlugin
How does a vendor instantiate its own decoder components? Let's take Qualcomm as the example:
void OMXMaster::addPlugin(const char *libname) {
    mVendorLibHandle = dlopen(libname, RTLD_NOW);
    ...
    if (createOMXPlugin) {
        // create the OMXPlugin and add it to our plugin list
        addPlugin((*createOMXPlugin)());
    }
}
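The elided lines resolve the factory symbol out of the freshly dlopen()ed vendor library; paraphrased from AOSP, they look roughly like this:

typedef OMXPluginBase *(*CreateOMXPluginFunc)();
CreateOMXPluginFunc createOMXPlugin =
    (CreateOMXPluginFunc)dlsym(mVendorLibHandle, "createOMXPlugin");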
hardware/qcom/media/libstagefrighthw/QComOMXPlugin.cpp
OMXPluginBase *createOMXPlugin() {
    return new QComOMXPlugin;
}

QComOMXPlugin::QComOMXPlugin()
    : mLibHandle(dlopen("libOmxCore.so", RTLD_NOW)),  // load Qualcomm's own OMX core
      mInit(NULL),
      mDeinit(NULL),
      mComponentNameEnum(NULL),
      mGetHandle(NULL),
      mFreeHandle(NULL),
      mGetRolesOfComponentHandle(NULL) {
    if (mLibHandle != NULL) {
        mInit = (InitFunc)dlsym(mLibHandle, "OMX_Init");
        mDeinit = (DeinitFunc)dlsym(mLibHandle, "OMX_DeInit");

        mComponentNameEnum =
            (ComponentNameEnumFunc)dlsym(mLibHandle, "OMX_ComponentNameEnum");

        mGetHandle = (GetHandleFunc)dlsym(mLibHandle, "OMX_GetHandle");
        mFreeHandle = (FreeHandleFunc)dlsym(mLibHandle, "OMX_FreeHandle");

        mGetRolesOfComponentHandle =
            (GetRolesOfComponentFunc)dlsym(
                    mLibHandle, "OMX_GetRolesOfComponent");

        (*mInit)();
    }
}
With that, Qualcomm's decoders are available to us, and when we create a component we can instantiate the corresponding Qualcomm component instance:
OMX_ERRORTYPE OMXMaster::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    Mutex::Autolock autoLock(mLock);

    *component = NULL;

    // look up the decoder named in media_codecs.xml in the plugin list
    ssize_t index = mPluginByComponentName.indexOfKey(String8(name));

    // the index leads us to the right xxOMXPlugin
    OMXPluginBase *plugin = mPluginByComponentName.valueAt(index);

    // create the component
    OMX_ERRORTYPE err =
        plugin->makeComponentInstance(name, callbacks, appData, component);

    mPluginByInstance.add(*component, plugin);

    return err;
}
hardware/qcom/media/libstagefrighthw/QComOMXPlugin.cpp
OMX_ERRORTYPE QComOMXPlugin::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    if (mLibHandle == NULL) {
        return OMX_ErrorUndefined;
    }

    String8 tmp;
    RemovePrefix(name, &tmp);
    name = tmp.string();

    return (*mGetHandle)(
            reinterpret_cast<OMX_HANDLETYPE *>(component),
            const_cast<char *>(name),
            appData, const_cast<OMX_CALLBACKTYPE *>(callbacks));
}
And with that, the journey from the app all the way down to the right decoder is complete!
As for how ComponentInstance, OMXCodecObserver, OMXCodec, and OMX relate to one another, I wrote a separate article; follow the link:
http://blog.csdn.net/tjy1985/article/details/7397752
OMXCodec talks to OMX over binder (IOMX). The decoder, in turn, reports back through the callbacks registered on OMXNodeInstance:

OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
    &OnEvent, &OnEmptyBufferDone, &OnFillBufferDone
};

OMX forwards these messages through its dispatcher to OMXCodecObserver, which hands them to the OMXCodec registered into it (observer->setCodec(codec)) for final handling.
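For reference, the callback table is the standard OpenMAX IL structure; paraphrased from the OMX_Core.h header:

typedef struct OMX_CALLBACKTYPE {
    // state changes, errors, command completion, port settings changes...
    OMX_ERRORTYPE (*EventHandler)(
            OMX_HANDLETYPE hComponent, OMX_PTR pAppData,
            OMX_EVENTTYPE eEvent, OMX_U32 nData1, OMX_U32 nData2,
            OMX_PTR pEventData);

    // an input buffer has been consumed and can be refilled
    OMX_ERRORTYPE (*EmptyBufferDone)(
            OMX_HANDLETYPE hComponent, OMX_PTR pAppData,
            OMX_BUFFERHEADERTYPE *pBuffer);

    // an output buffer now carries decoded data
    OMX_ERRORTYPE (*FillBufferDone)(
            OMX_HANDLETYPE hComponent, OMX_PTR pAppData,
            OMX_BUFFERHEADERTYPE *pBuffer);
} OMX_CALLBACKTYPE;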
With that, the path by which Stagefright reaches the decoder through OpenMAX IL is complete.
The last remaining step of Create is configureCodec(meta), which mainly sets up the output width/height and calls initNativeWindow.
One thing I forgot to mention: OMXCodec's states:
enum State {
    DEAD,
    LOADED,
    LOADED_TO_IDLE,
    IDLE_TO_EXECUTING,
    EXECUTING,
    EXECUTING_TO_IDLE,
    IDLE_TO_LOADED,
    RECONFIGURING,
    ERROR
};
When OMXCodec is instantiated, it sits in the LOADED state. After LOADED the next stop should be LOADED_TO_IDLE; so when do we get there? In the start method we discuss next:
status_t err = mVideoSource->start();
mVideoSource is the OMXCodec, so let's step into OMXCodec.cpp to find out:
status_t OMXCodec::start(MetaData *meta) {
    ...
    return init();
}

status_t OMXCodec::init() {
    ...
    err = allocateBuffers();

    err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
    setState(LOADED_TO_IDLE);
    ...
}
So start does three things.
1: allocateBuffers sets up input-port buffers to hold the compressed data and prepares matching native window buffers on the output port:
status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);

    if (err != OK) {
        return err;
    }

    return allocateBuffersOnPort(kPortIndexOutput);
}
2: Once allocation is done, it commands the decoder component into the IDLE state (for the sendCommand flow, refer to the emptyBuffer path).
3: It marks itself as transitioning to IDLE as well (LOADED_TO_IDLE). The state machine advances further once the component confirms, as sketched below.
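Paraphrased from OMXCodec::onStateChange(): once the component reports it has reached OMX_StateIdle, OMXCodec immediately requests the EXECUTING state:

case OMX_StateIdle:
{
    if (mState == LOADED_TO_IDLE) {
        // the component confirmed IDLE; now push on toward EXECUTING
        status_t err = mOMX->sendCommand(
                mNode, OMX_CommandStateSet, OMX_StateExecuting);
        CHECK_EQ(err, (status_t)OK);
        setState(IDLE_TO_EXECUTING);
    }
    ...
    break;
}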
That completes initVideoDecoder. initAudioDecoder follows much the same flow, so I won't walk through it here; trace it yourself if you're interested.
The final step of prepare is finishAsyncPrepare_l(): announce to the outside world that prepare is complete, remove the QueueItem from the TimedEventQueue, and set mAsyncPrepareEvent = NULL. A paraphrased sketch:
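Paraphrased from AwesomePlayer::finishAsyncPrepare_l() in AOSP 4.1; the exact body may differ slightly across versions:

void AwesomePlayer::finishAsyncPrepare_l() {
    if (mIsAsyncPrepare) {
        if (mVideoSource == NULL) {
            notifyListener_l(MEDIA_SET_VIDEO_SIZE, 0, 0);
        } else {
            notifyVideoSize_l();
        }
        notifyListener_l(MEDIA_PREPARED);  // tell the app we're done
    }

    mPrepareResult = OK;
    modifyFlags((PREPARING | PREPARE_CANCELLED | PREPARING_CONNECTED), CLEAR);
    modifyFlags(PREPARED, SET);
    mAsyncPrepareEvent = NULL;       // drop the queue item's event
    mPreparedCondition.broadcast();  // wake any synchronous prepare() waiter
}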
It took a lot of words and time, but the prepare flow is finally complete: all the channels are open, and what comes next is the playback process itself.