AwesomePlayer Source Code Analysis

 1. AwesomeEvent is a class for dispatching events asynchronously, similar in role to the framework's Looper/Handler pair. The player has some long-running operations, such as parsing a file; running them asynchronously and reporting back through a callback gives a much better user experience.
struct AwesomeEvent : public TimedEventQueue::Event
    It inherits from TimedEventQueue::Event (see an earlier blog post on C++ nested classes). TimedEventQueue spins up a thread of its own to dispatch posted events.
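As a rough sketch of what such an event queue does (illustrative C++11 code with names of my own, not the real TimedEventQueue API):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A queue that runs posted events on its own thread, similar in spirit
// to stagefright's TimedEventQueue or a Looper/Handler pair.
class EventQueue {
public:
    void start() {
        mRunning = true;
        mThread = std::thread([this] { loop(); });
    }
    void stop() {  // drains remaining events, then joins the thread
        {
            std::lock_guard<std::mutex> lock(mLock);
            mRunning = false;
        }
        mCond.notify_one();
        mThread.join();
    }
    // postEvent: hand work to the queue thread; never blocks the caller.
    void postEvent(std::function<void()> event) {
        std::lock_guard<std::mutex> lock(mLock);
        mEvents.push(std::move(event));
        mCond.notify_one();
    }

private:
    void loop() {
        for (;;) {
            std::function<void()> event;
            {
                std::unique_lock<std::mutex> lock(mLock);
                mCond.wait(lock, [this] { return !mEvents.empty() || !mRunning; });
                if (!mRunning && mEvents.empty()) return;
                event = std::move(mEvents.front());
                mEvents.pop();
            }
            event();  // runs on the queue thread, not the caller's
        }
    }
    std::thread mThread;
    std::mutex mLock;
    std::condition_variable mCond;
    std::queue<std::function<void()>> mEvents;
    bool mRunning = false;
};
```

The caller returns immediately from postEvent; the heavy work happens on the queue's thread, which is exactly why AwesomePlayer can keep its public calls fast.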

 
  2. AwesomeRemoteRenderer
   AwesomeLocalRenderer
 These two classes post decoded frames to the surface. The remote variant reportedly exists because of how OMX nodes are used (I haven't studied it in detail yet), while the local renderer takes a different code path depending on whether hardware acceleration is available.

 
3. The constructor initializes the events; more on those later.
The typical MediaPlayer call sequence is:
setDataSource
prepareAsync
start
The analysis below covers only the file-source path.
 
 setDataSource usually just records the file's URI:
mUri = uri; — nothing more happens at this point. There is a comment right below it:
    // The actual work will be done during preparation in the call to
    // ::finishSetDataSource_l to avoid blocking the calling thread in
    // setDataSource for any significant time.
Nothing else is done here because it would take too long and could trigger an ANR in the upper layer — setDataSource is a synchronous call.
The real work happens in prepareAsync. Looking at that function:
    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    } // starts the event queue to prepare for later event posts; a thread is created here
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);
    mQueue.postEvent(mAsyncPrepareEvent);
// post the event so the work can run asynchronously
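The AwesomeEvent constructor above takes `this` plus a member-function pointer, so the queue thread can later call back into the player. A stripped-down sketch of that pattern (types here are illustrative, not the real AwesomeEvent):

```cpp
// An event that stores an object pointer and a member-function pointer,
// and invokes the member when the queue fires the event.
template <typename T>
struct MemberEvent {
    T* obj;
    void (T::*method)();
    void fire() { (obj->*method)(); }
};

struct Player {
    bool prepared = false;
    // Stands in for AwesomePlayer::onPrepareAsyncEvent, the deferred work.
    void onPrepareAsyncEvent() { prepared = true; }
};
```

Posting `MemberEvent<Player>{this, &Player::onPrepareAsyncEvent}` to the queue is the moral equivalent of what prepareAsync does with mAsyncPrepareEvent.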

 
This lands in onPrepareAsyncEvent():
 if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

 
finishSetDataSource_l() is the function the earlier comment referred to. It creates a different extractor depending on the source type.
For a file source it boils down to: dataSource = DataSource::CreateFromURI(mUri.string(), &mUriHeaders); // mUriHeaders is NULL here, so ignore it
which in turn jumps to source = new FileSource(uri + 7); — this creates the FileSource. The registered sniffers probe it for a MIME type, which determines the extractor that will operate on the file source.
Back in finishSetDataSource_l():
sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource, mime);
At this point the data source has been sniffed, and the matching extractor is created from the MIME type.
Finally comes setDataSource itself — the work that nominally should have been step one.
This is where the audio track and video track are parsed out, ready for the codec stage below.
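The sniffing step can be pictured as a chain of probes over the file's first bytes (a simplified sketch; the real sniffers also return a confidence score, and the registration API differs):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Each sniffer knows a magic-byte signature at some offset and the MIME
// type it implies; the first match decides which extractor to build.
struct Sniffer {
    std::string mime;
    std::vector<uint8_t> magic;  // bytes expected at `offset`
    std::size_t offset;
};

std::string sniffMime(const std::vector<uint8_t>& header,
                      const std::vector<Sniffer>& sniffers) {
    for (const auto& s : sniffers) {
        if (header.size() >= s.offset + s.magic.size() &&
            std::equal(s.magic.begin(), s.magic.end(),
                       header.begin() + s.offset)) {
            return s.mime;
        }
    }
    return "";  // unrecognized container
}
```

For example, an MP4 file carries an `ftyp` box whose type tag sits at byte offset 4, which is the kind of signature a sniffer keys on.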

    if (mVideoTrack != NULL && mVideoSource == NULL) {
        // Use NPT timestamps if playing
        // RTSP streaming with video only content
        // (no audio to drive the clock for media time)
        uint32_t flags = 0;
        if (mRTSPController != NULL && mAudioTrack == NULL) {
            flags |= OMXCodec::kUseNptTimestamp;
        }
        status_t err = initVideoDecoder(flags);
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }
    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }
Now for the most critical part: the codecs. Coming out of the steps above, both the audio track and the video track exist, so the codecs can be initialized:
initVideoDecoder(flags);
initAudioDecoder();

{
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false, // createEncoder
            mVideoTrack,
            NULL, flags);
OMXCodec is a key class; it belongs to the stagefright code I wrote about a few days ago, and I'll keep covering it there rather than here. In short, it returns a video source that yields decoded data from the video track.
    if (mVideoSource != NULL) {
        int64_t durationUs;
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }
        CHECK(mVideoTrack->getFormat()->findInt32(kKeyWidth, &mVideoWidth));
        CHECK(mVideoTrack->getFormat()->findInt32(kKeyHeight, &mVideoHeight));
        status_t err = mVideoSource->start(); // both the video source and the video track get prepared here, and buffers are allocated
        if (err != OK) {
            mVideoSource.clear();
            return err;
        }
    }
    return mVideoSource != NULL ? OK : UNKNOWN_ERROR;
}
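The findInt64/findInt32 calls above read typed values out of the track's format metadata. A toy stand-in for that MetaData lookup pattern (class and key names here are mine, not stagefright's):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Typed key/value lookups that return false when the key is absent,
// mirroring the findInt32/findInt64 calls in initVideoDecoder.
class Meta {
public:
    void setInt32(const std::string& key, int32_t v) { m32[key] = v; }
    void setInt64(const std::string& key, int64_t v) { m64[key] = v; }

    bool findInt32(const std::string& key, int32_t* out) const {
        auto it = m32.find(key);
        if (it == m32.end()) return false;
        *out = it->second;
        return true;
    }
    bool findInt64(const std::string& key, int64_t* out) const {
        auto it = m64.find(key);
        if (it == m64.end()) return false;
        *out = it->second;
        return true;
    }

private:
    std::map<std::string, int32_t> m32;
    std::map<std::string, int64_t> m64;
};
```

The CHECK around the width/height lookups then simply asserts that the extractor really did publish those keys for the video track.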
Once this is done, the state is updated and the prepareAsync callback is delivered to the upper layer.
The data is now ready, and playback continues in play_l():
if ((mVideoSource != NULL) && (!mVideoBuffer))
{
    // Changes to fix Audio starts playing before video.
    // First video frame is returned late as it is referenced to decode subsequent P and B frames.
    // For higher resolutions (e.g. 1080p) this returning time is significant.
    // We need to trigger Video decoder earlier than audio so that Video catch up with audio in time.
        MediaSource::ReadOptions options;
        if (mSeeking) {
            LOGV("seeking to %lld us (%.2f secs)", mSeekTimeUs, mSeekTimeUs / 1E6);
            options.setSeekTo(
                    mSeekTimeUs, MediaSource::ReadOptions::SEEK_CLOSEST_SYNC);
        }
        for (;;) {
            status_t err = mVideoSource->read(&mVideoBuffer, &options);
            options.clearSeekTo();
            if (err != OK) {
                CHECK_EQ(mVideoBuffer, NULL);
                if (err == INFO_FORMAT_CHANGED) {
                    LOGV("VideoSource signalled format change.");
                    if (mVideoRenderer != NULL) {
                        mVideoRendererIsPreview = false;
                        initRenderer_l();
                    }
                    continue;
                }
                break;
            }
 
            break;
        }
    }
    if (mVideoSource != NULL) {
        // Kick off video playback
        postVideoEvent_l();
    }
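The retry-on-format-change loop above can be distilled into a sketch like this (the status names and the retry cap are my simplifications, not the real AwesomePlayer code):

```cpp
#include <functional>

enum Status { STATUS_OK, STATUS_FORMAT_CHANGED, STATUS_ERROR };

// read() may report a format change, in which case the renderer is
// rebuilt and the read retried; a successful read or any other error
// ends the loop.
Status readFirstFrame(std::function<Status()> read,
                      std::function<void()> reinitRenderer,
                      int maxRetries = 4) {
    for (int i = 0; i <= maxRetries; ++i) {
        Status err = read();
        if (err == STATUS_FORMAT_CHANGED) {
            reinitRenderer();  // plays the role of initRenderer_l()
            continue;          // retry the read with the new format
        }
        return err;            // STATUS_OK or a real error: stop looping
    }
    return STATUS_ERROR;
}
```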
The mVideoBuffer obtained here is then rendered onto the surface.
Audio goes through AudioPlayer instead:
 mAudioPlayer = new AudioPlayer(mAudioSink, this);
 mAudioPlayer->setSource(mAudioSource);
 // We've already started the MediaSource in order to enable
 // the prefetcher to read its data.
 status_t err = mAudioPlayer->start(
         true /* sourceAlreadyStarted */);
A/V sync and seeking are topics for a later post.

From shcalm's column
