Android Audio Code Analysis 10 – audio_track_cblk_t::framesReady

While reading AudioTrack's write function, we learned that audio data is ultimately written into an audio_track_cblk_t structure,
and that this structure is created in AudioFlinger.
So how does AudioFlinger consume that data?
That is what we will look into today.


When we write data, audio_track_cblk_t::framesAvailable_l is called to check whether there is free space to write into.
The class audio_track_cblk_t has another function, framesReady, whose name suggests it tells us how much data is already prepared.
Presumably, when AudioFlinger consumes audio data, it first calls framesReady to find out how many frames we have written, and then consumes that many.
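The pairing of framesAvailable and framesReady is the classic producer/consumer accounting on a shared ring buffer. Here is a minimal sketch of that idea (illustrative only, not the real audio_track_cblk_t, and ignoring the wrap handling done via userBase/serverBase):

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch (not the real audio_track_cblk_t): a FIFO described by
// two monotonically increasing frame counters.
//   user   - total frames the client has written
//   server - total frames AudioFlinger has consumed
struct CblkSketch {
    uint64_t user = 0;    // write cursor (client side)
    uint64_t server = 0;  // read cursor (server side)
    uint32_t frameCount;  // capacity of the shared buffer, in frames

    explicit CblkSketch(uint32_t frames) : frameCount(frames) {}

    // Frames the mixer may consume right now.
    uint64_t framesReady() const { return user - server; }

    // Frames the writer may still fill without overwriting unread data.
    uint64_t framesAvailable() const { return frameCount - (user - server); }
};
```

With this bookkeeping, the write side blocks when framesAvailable() hits zero, and the mixer side has nothing to do when framesReady() is zero.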


*****************************************Source code*************************************************
uint32_t audio_track_cblk_t::framesReady()
{
    uint64_t u = this->user;
    uint64_t s = this->server;


    if (flags & CBLK_DIRECTION_MSK) {
        if (u < loopEnd) {
            return u - s;
        } else {
            Mutex::Autolock _l(lock);
            if (loopCount >= 0) {
                return (loopEnd - loopStart)*loopCount + u - s;
            } else {
                return UINT_MAX;
            }
        }
    } else {
        return s - u;
    }
}
**********************************************************************************************
Source path:
frameworks\base\media\libmedia\AudioTrack.cpp
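To make the branches above concrete, here is the looping branch isolated as a standalone function (a sketch with illustrative parameters, ignoring the lock and the shared-memory layout): once the write cursor u has passed loopEnd, each remaining loop pass contributes (loopEnd - loopStart) extra frames on top of the plain u - s backlog, and a negative loopCount means "loop forever", so the data never runs out.

```cpp
#include <cassert>
#include <cstdint>
#include <climits>

// The looping branch of framesReady(), isolated for illustration.
static uint64_t framesReadyLooping(uint64_t u, uint64_t s,
                                   uint64_t loopStart, uint64_t loopEnd,
                                   int loopCount) {
    if (u < loopEnd) return u - s;       // still before the loop end point
    if (loopCount < 0) return UINT_MAX;  // infinite loop: always "ready"
    // Each remaining pass over the loop region adds its length in frames.
    return (loopEnd - loopStart) * loopCount + u - s;
}
```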


#######################Notes################################
So far we have been tracing the code top-down; this time we will work bottom-up
and look for the callers of framesReady.
A search turns up quite a few call sites.
However, most of them only use the return value in a condition;
only AudioFlinger::PlaybackThread::Track::getNextBuffer saves the return value.
From earlier reading we know that on the write side, the return value of framesAvailable is saved and used.
By analogy, on the read side the return value of framesReady should be saved and used as well.
That brings us to AudioFlinger::PlaybackThread::Track::getNextBuffer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
     audio_track_cblk_t* cblk = this->cblk();
     uint32_t framesReady;
     uint32_t framesReq = buffer->frameCount;


     // Check if last stepServer failed, try to step now
     if (mFlags & TrackBase::STEPSERVER_FAILED) {
         if (!step())  goto getNextBuffer_exit;
         LOGV("stepServer recovered");
         mFlags &= ~TrackBase::STEPSERVER_FAILED;
     }


     framesReady = cblk->framesReady();


     if (LIKELY(framesReady)) {
        uint64_t s = cblk->server;
        uint64_t bufferEnd = cblk->serverBase + cblk->frameCount;


        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }


         buffer->raw = getBuffer(s, framesReq);
         if (buffer->raw == 0) goto getNextBuffer_exit;
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void* AudioFlinger::ThreadBase::TrackBase::getBuffer(uint32_t offset, uint32_t frames) const {
    audio_track_cblk_t* cblk = this->cblk();
    int8_t *bufferStart = (int8_t *)mBuffer + (offset-cblk->serverBase)*cblk->frameSize;
    int8_t *bufferEnd = bufferStart + frames * cblk->frameSize;


    // Check validity of returned pointer in case the track control block would have been corrupted.
    if (bufferStart < mBuffer || bufferStart > bufferEnd || bufferEnd > mBufferEnd ||
        ((unsigned long)bufferStart & (unsigned long)(cblk->frameSize - 1))) {
        LOGE("TrackBase::getBuffer buffer out of range:\n    start: %p, end %p , mBuffer %p mBufferEnd %p\n    \
                server %lld, serverBase %lld, user %lld, userBase %lld, channelCount %d",
                bufferStart, bufferEnd, mBuffer, mBufferEnd,
                cblk->server, cblk->serverBase, cblk->user, cblk->userBase, cblk->channelCount);
        return 0;
    }


    return bufferStart;
}
—————————————————————-


         buffer->frameCount = framesReq;
        return NO_ERROR;
     }


getNextBuffer_exit:
     buffer->raw = 0;
     buffer->frameCount = 0;
     LOGV("getNextBuffer() no more data for track %d on thread %p", mName, mThread.unsafe_get());
     return NOT_ENOUGH_DATA;
}
—————————————————————-
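The pointer arithmetic inside getBuffer can be isolated as follows. This is a sketch with hypothetical names: the idea is that offset is the running server counter, and subtracting serverBase rebases it to a frame index within the shared buffer; the validity check mirrors the one in the real code and assumes frameSize is a power of two:

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>

// Illustrative version of the TrackBase::getBuffer arithmetic.
static int8_t* bufferAt(int8_t* base, size_t bufSize,
                        uint64_t offset, uint64_t serverBase,
                        uint32_t frameSize, uint32_t frames) {
    // Rebase the monotonically increasing counter onto the buffer start.
    int8_t* start = base + (offset - serverBase) * frameSize;
    int8_t* end = start + (size_t)frames * frameSize;
    // Reject out-of-range or misaligned results, as the real code does
    // (frameSize is assumed to be a power of two here).
    if (start < base || end > base + bufSize ||
        ((uintptr_t)start & (uintptr_t)(frameSize - 1))) {
        return nullptr;
    }
    return start;
}
```

The range check exists because the control block lives in shared memory: a misbehaving client could corrupt server/serverBase, so the server never trusts the computed pointer blindly.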


Next, let's see where AudioFlinger::PlaybackThread::Track::getNextBuffer is called.
A search shows that inside AudioFlinger, only AudioFlinger::DirectOutputThread::threadLoop calls it directly,
and since the AudioTrack we are discussing is used to play music, that is definitely not the path we want.


In addition, several places in AudioMixer call getNextBuffer, with calls of the form:
t.bufferProvider->getNextBuffer(&t.buffer);
So what is the relationship between bufferProvider and AudioFlinger::PlaybackThread::Track?


bufferProvider is declared as: AudioBufferProvider*                bufferProvider;
The following inheritance relationships exist:
class Track : public TrackBase
class TrackBase : public AudioBufferProvider, public RefBase


So bufferProvider ultimately points to an AudioFlinger::PlaybackThread::Track object,
which means t.bufferProvider->getNextBuffer(&t.buffer) actually calls AudioFlinger::PlaybackThread::Track::getNextBuffer.
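This is ordinary C++ virtual dispatch: the mixer holds only an AudioBufferProvider*, and the virtual call resolves to the Track override at runtime. A reduced sketch of that mechanism (types are illustrative stand-ins, not the real classes):

```cpp
#include <cassert>
#include <cstddef>

// Reduced model of the Track -> TrackBase -> AudioBufferProvider chain.
struct Buffer { void* raw = nullptr; size_t frameCount = 0; };

struct AudioBufferProvider {
    virtual ~AudioBufferProvider() {}
    virtual int getNextBuffer(Buffer* buffer) = 0;
};

struct TrackBase : public AudioBufferProvider {};

struct Track : public TrackBase {
    int getNextBuffer(Buffer* buffer) override {
        buffer->frameCount = 42;  // pretend 42 frames are ready
        return 0;
    }
};

// The mixer only ever sees the base interface...
static size_t pullFrames(AudioBufferProvider* provider) {
    Buffer b;
    provider->getNextBuffer(&b);  // ...but the virtual call lands in Track
    return b.frameCount;
}
```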


AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setBufferProvider,
and AudioMixer::setBufferProvider is where bufferProvider is assigned.


prepareTracks_l in turn is called from AudioFlinger::MixerThread::threadLoop
and AudioFlinger::DuplicatingThread::threadLoop.


Let's first look at the places in AudioMixer that call AudioFlinger::PlaybackThread::Track::getNextBuffer.
The following AudioMixer functions all call getNextBuffer:
process__nop
process__genericNoResampling
process__genericResampling
process__OneTrack16BitsStereoNoResampling
process__TwoTracks16BitsStereoNoResampling


Here we will take process__OneTrack16BitsStereoNoResampling as our example.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// one track, 16 bits stereo without resampling is the most common case
void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];


    AudioBufferProvider::Buffer& b(t.buffer);


    int32_t* out = t.mainBuffer;
    size_t numFrames = state->frameCount;


    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        t.bufferProvider->getNextBuffer(&b);
        int16_t const *in = b.i16;


        // in == NULL can happen if the track was flushed just after having
        // been enabled for mixing.
        if (in == NULL || ((unsigned long)in & 3)) {
            memset(out, 0, numFrames*MAX_NUM_CHANNELS*sizeof(int16_t));
            LOGE_IF(((unsigned long)in & 3), "process stereo track: input buffer alignment pb: buffer %p track %d, channels %d, needs %08x",
                    in, i, t.channelCount, t.needs);
            return;
        }
        size_t outFrames = b.frameCount;


        if (UNLIKELY(uint32_t(vl) > UNITY_GAIN || uint32_t(vr) > UNITY_GAIN)) {
            // volume is boosted, so we might need to clamp even though
            // we process only one track.
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                // clamping…
                l = clamp16(l);
                r = clamp16(r);
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        } else {
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        numFrames -= b.frameCount;
        t.bufferProvider->releaseBuffer(&b);
    }
}
—————————————————————-
As we can see, the function's main job is to copy data from the audio_track_cblk_t into track_t's mainBuffer.
So what is track_t? It is a member of state_t.
And what is state_t? It is the parameter passed into the function.
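To see what one loop iteration does to the samples, here is the gain step written without the packed mulRL() helper. This is a sketch under the assumption that the low 16 bits of the packed word hold the left sample and the high 16 bits the right, with volumes in 4.12 fixed point (unity gain = 0x1000):

```cpp
#include <cassert>
#include <cstdint>

// Clamp a 32-bit intermediate back into 16-bit sample range.
static int16_t clamp16(int32_t v) {
    if (v < -32768) return -32768;
    if (v > 32767) return 32767;
    return (int16_t)v;
}

// Apply a 4.12 fixed-point gain to one packed 16-bit stereo frame.
// Assumption: low half = left sample, high half = right sample.
static uint32_t applyGainLR(uint32_t rl, int16_t vl, int16_t vr) {
    int16_t l = (int16_t)(rl & 0xFFFF);
    int16_t r = (int16_t)(rl >> 16);
    int32_t lo = clamp16(((int32_t)l * vl) >> 12);  // >>12 undoes 4.12 scale
    int32_t ro = clamp16(((int32_t)r * vr) >> 12);
    return ((uint32_t)(uint16_t)ro << 16) | (uint16_t)lo;
}
```

Packing both channels into one 32-bit word is why the fast path requires 16-bit stereo input: the real mulRL() multiplies within that packed word on ARM in fewer instructions than two separate multiplies.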


Two questions now need exploring:
1. How is process__OneTrack16BitsStereoNoResampling called, and where does its parameter come from?
2. How is the data placed in track_t's mainBuffer consumed?


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
First, let's see how process__OneTrack16BitsStereoNoResampling gets called.
AudioMixer::process__validate makes use of it.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioMixer::process__validate(state_t* state)
{
    LOGW_IF(!state->needsChanged,
        "in process__validate() but nothing's invalid");


    uint32_t changed = state->needsChanged;
    state->needsChanged = 0; // clear the validation flag


    // recompute which tracks are enabled / disabled
    uint32_t enabled = 0;
    uint32_t disabled = 0;
    while (changed) {
        const int i = 31 - __builtin_clz(changed);
        const uint32_t mask = 1<<i;
        changed &= ~mask;
        track_t& t = state->tracks[i];
        (t.enabled ? enabled : disabled) |= mask;
    }
    state->enabledTracks &= ~disabled;
    state->enabledTracks |=  enabled;


    // compute everything we need…
    int countActiveTracks = 0;
    int all16BitsStereoNoResample = 1;
    int resampling = 0;
    int volumeRamp = 0;
    uint32_t en = state->enabledTracks;
    while (en) {
        const int i = 31 - __builtin_clz(en);
        en &= ~(1<<i);


        countActiveTracks++;
        track_t& t = state->tracks[i];
        uint32_t n = 0;
        n |= NEEDS_CHANNEL_1 + t.channelCount - 1;
        n |= NEEDS_FORMAT_16;
        n |= t.doesResample() ? NEEDS_RESAMPLE_ENABLED : NEEDS_RESAMPLE_DISABLED;
        if (t.auxLevel != 0 && t.auxBuffer != NULL) {
            n |= NEEDS_AUX_ENABLED;
        }


        if (t.volumeInc[0]|t.volumeInc[1]) {
            volumeRamp = 1;
        } else if (!t.doesResample() && t.volumeRL == 0) {
            n |= NEEDS_MUTE_ENABLED;
        }
        t.needs = n;


        if ((n & NEEDS_MUTE__MASK) == NEEDS_MUTE_ENABLED) {
            t.hook = track__nop;
        } else {
            if ((n & NEEDS_AUX__MASK) == NEEDS_AUX_ENABLED) {
                all16BitsStereoNoResample = 0;
            }
            if ((n & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
                all16BitsStereoNoResample = 0;
                resampling = 1;
                t.hook = track__genericResample;
            } else {
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){
                    t.hook = track__16BitsMono;
                    all16BitsStereoNoResample = 0;
                }
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_2){
                    t.hook = track__16BitsStereo;
                }
            }
        }
    }


    // select the processing hooks
    state->hook = process__nop;
    if (countActiveTracks) {
        if (resampling) {
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;
        } else {
            if (state->outputTemp) {
                delete [] state->outputTemp;
                state->outputTemp = 0;
            }
            if (state->resampleTemp) {
                delete [] state->resampleTemp;
                state->resampleTemp = 0;
            }
            state->hook = process__genericNoResampling;
            if (all16BitsStereoNoResample && !volumeRamp) {
                if (countActiveTracks == 1) {
                    state->hook = process__OneTrack16BitsStereoNoResampling;
                }
            }
        }
    }


    LOGV("mixer configuration change: %d activeTracks (%08x) "
        "all16BitsStereoNoResample=%d, resampling=%d, volumeRamp=%d",
        countActiveTracks, state->enabledTracks,
        all16BitsStereoNoResample, resampling, volumeRamp);


   state->hook(state);


   // Now that the volume ramp has been done, set optimal state and
   // track hooks for subsequent mixer process
   if (countActiveTracks) {
       int allMuted = 1;
       uint32_t en = state->enabledTracks;
       while (en) {
           const int i = 31 - __builtin_clz(en);
           en &= ~(1<<i);
           track_t& t = state->tracks[i];
           if (!t.doesResample() && t.volumeRL == 0)
           {
               t.needs |= NEEDS_MUTE_ENABLED;
               t.hook = track__nop;
           } else {
               allMuted = 0;
           }
       }
       if (allMuted) {
           state->hook = process__nop;
       } else if (all16BitsStereoNoResample) {
           if (countActiveTracks == 1) {
              state->hook = process__OneTrack16BitsStereoNoResampling;
           }
       }
   }
}
—————————————————————-
process__validate assigns it to state's hook.
state is the passed-in parameter; hook is a function pointer.
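The `while (en) { i = 31 - __builtin_clz(en); en &= ~(1<<i); }` idiom that process__validate uses to walk enabledTracks visits each set bit from highest index to lowest (31 - __builtin_clz(x) is the index of the top set bit). A standalone sketch:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// enabledTracks is a bitmask: bit i set means track i is active.
// Collect the active indices the same way process__validate walks them.
static std::vector<int> activeTrackIndices(uint32_t enabledTracks) {
    std::vector<int> indices;
    uint32_t en = enabledTracks;
    while (en) {
        const int i = 31 - __builtin_clz(en);  // index of highest set bit
        en &= ~(1u << i);                      // clear the bit just visited
        indices.push_back(i);
    }
    return indices;
}
```

Note that __builtin_clz is undefined for a zero argument, which is why the loop condition tests en before calling it.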


This raises two more questions:
1. Since hook is a function pointer, when is it invoked?
2. When is process__validate itself called?


The first question is simple: the hook is invoked within process__validate itself, via state->hook(state).


Now let's see where process__validate is used.
AudioMixer::invalidateState assigns process__validate to mState's hook.
mState is a member variable of AudioMixer.
hook is a function pointer.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 void AudioMixer::invalidateState(uint32_t mask)
 {
    if (mask) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
 }
—————————————————————-


As before, two more questions emerge:
1. Where is mState's hook invoked?
2. Where is invalidateState called?


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Let's look at the first question.
AudioMixer::process invokes mState's hook.
void AudioMixer::process()
{
    mState.hook(&mState);
}
So the state parameter of process__OneTrack16BitsStereoNoResampling is exactly this mState; the full chain is:
mState.hook(&mState);
mState.hook = process__validate;
state->hook = process__OneTrack16BitsStereoNoResampling;
state->hook(state);
—————————————————————-
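The chain above is a small state machine built out of a function pointer: invalidateState parks the hook on a validation routine, and the next process() call re-selects the optimal hook and runs it, so subsequent calls skip validation entirely. A reduced sketch (names are illustrative):

```cpp
#include <cassert>

// Illustrative model of the AudioMixer hook indirection.
struct StateSketch {
    void (*hook)(StateSketch*) = nullptr;
    int processedFrames = 0;
    bool validated = false;
};

static void processNormal(StateSketch* s) { s->processedFrames += 512; }

static void processValidate(StateSketch* s) {
    s->validated = true;
    s->hook = processNormal;  // pick the optimal processing hook...
    s->hook(s);               // ...and run it once right away
}

static void invalidateState(StateSketch* s) { s->hook = processValidate; }
```

The payoff is that process() stays a one-line indirect call: configuration changes pay the validation cost once, while the steady-state mixing path dispatches straight to the specialized routine.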
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Now for the second question: where is invalidateState called?
There are quite a few call sites, but we only care about these two functions:
AudioMixer::enable
AudioMixer::setParameter
Both of them are called from AudioFlinger::MixerThread::prepareTracks_l,
and prepareTracks_l is in turn called from AudioFlinger::MixerThread::threadLoop.


Let's take them one at a time.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::enable(int name)
{
    switch (name) {
        case MIXING: {
            if (mState.tracks[ mActiveTrack ].enabled != 1) {
                mState.tracks[ mActiveTrack ].enabled = 1;
                LOGV("enable(%d)", mActiveTrack);
                invalidateState(1<<mActiveTrack);
            }
        } break;
        default:
            return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}
—————————————————————-
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setParameter(int target, int name, void *value)
{
    int valueInt = (int)value;
    int32_t *valueBuf = (int32_t *)value;


    switch (target) {
    case TRACK:
        if (name == CHANNEL_COUNT) {
            if ((uint32_t(valueInt) <= MAX_NUM_CHANNELS) && (valueInt)) {
                if (mState.tracks[ mActiveTrack ].channelCount != valueInt) {
                    mState.tracks[ mActiveTrack ].channelCount = valueInt;
                    LOGV("setParameter(TRACK, CHANNEL_COUNT, %d)", valueInt);
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        if (name == MAIN_BUFFER) {
            if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
                LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        if (name == AUX_BUFFER) {
            if (mState.tracks[ mActiveTrack ].auxBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].auxBuffer = valueBuf;
                LOGV("setParameter(TRACK, AUX_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }


        break;
    case RESAMPLE:
        if (name == SAMPLE_RATE) {
            if (valueInt > 0) {
                track_t& track = mState.tracks[ mActiveTrack ];
                if (track.setResampler(uint32_t(valueInt), mSampleRate)) {
                    LOGV("setParameter(RESAMPLE, SAMPLE_RATE, %u)",
                            uint32_t(valueInt));
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        break;
    case RAMP_VOLUME:
    case VOLUME:
        if ((uint32_t(name-VOLUME0) < MAX_NUM_CHANNELS)) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.volume[name-VOLUME0] != valueInt) {
                LOGV("setParameter(VOLUME, VOLUME0/1: %04x)", valueInt);
                track.prevVolume[name-VOLUME0] = track.volume[name-VOLUME0] << 16;
                track.volume[name-VOLUME0] = valueInt;
                if (target == VOLUME) {
                    track.prevVolume[name-VOLUME0] = valueInt << 16;
                    track.volumeInc[name-VOLUME0] = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevVolume[name-VOLUME0];
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.volumeInc[name-VOLUME0] = volInc;
                    if (volInc == 0) {
                        track.prevVolume[name-VOLUME0] = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        } else if (name == AUXLEVEL) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.auxLevel != valueInt) {
                LOGV("setParameter(VOLUME, AUXLEVEL: %04x)", valueInt);
                track.prevAuxLevel = track.auxLevel << 16;
                track.auxLevel = valueInt;
                if (target == VOLUME) {
                    track.prevAuxLevel = valueInt << 16;
                    track.auxInc = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevAuxLevel;
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.auxInc = volInc;
                    if (volInc == 0) {
                        track.prevAuxLevel = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        break;
    }
    return BAD_VALUE;
}
—————————————————————-
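The RAMP_VOLUME branch above interpolates volume in 16.16 fixed point: it computes the difference between the target (shifted up by 16) and the previous volume, and divides by the frame count to get a per-frame increment. A sketch of that arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative model of the volume ramp set up in setParameter().
struct RampSketch {
    int32_t prevVolume;  // 16.16 current volume
    int32_t volumeInc;   // 16.16 per-frame step
};

static RampSketch makeRamp(int32_t prev4_12, int32_t target4_12,
                           int32_t frameCount) {
    RampSketch r;
    r.prevVolume = prev4_12 << 16;                 // promote 4.12 -> 16.16
    int32_t d = (target4_12 << 16) - r.prevVolume; // total change, 16.16
    r.volumeInc = d / frameCount;                  // step applied per frame
    return r;
}
```

Stepping prevVolume by volumeInc once per frame reaches the target over one mix buffer (up to integer-division rounding, which is why the real code snaps prevVolume to the target when the increment rounds to zero).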
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// prepareTracks_l() must be called with ThreadBase::mLock held
uint32_t AudioFlinger::MixerThread::prepareTracks_l(const SortedVector< wp<Track> >& activeTracks, Vector< sp<Track> > *tracksToRemove)
{


    uint32_t mixerStatus = MIXER_IDLE;
    // find out which tracks need to be processed
    size_t count = activeTracks.size();
    size_t mixedTracks = 0;
    size_t tracksWithEffect = 0;


    float masterVolume = mMasterVolume;
    bool  masterMute = mMasterMute;


    if (masterMute) {
        masterVolume = 0;
    }
#ifdef LVMX
    bool tracksConnectedChanged = false;
    bool stateChanged = false;


    int audioOutputType = LifeVibes::getMixerType(mId, mType);
    if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
    {
        int activeTypes = 0;
        for (size_t i=0 ; i<count ; i++) {
            sp<Track> t = activeTracks[i].promote();
            if (t == 0) continue;
            Track* const track = t.get();
            int iTracktype=track->type();
            activeTypes |= 1<<track->type();
        }
        LifeVibes::computeVolumes(audioOutputType, activeTypes, tracksConnectedChanged, stateChanged, masterVolume, masterMute);
    }
#endif
    // Delegate master volume control to effect in output mix effect chain if needed
    sp<EffectChain> chain = getEffectChain_l(AudioSystem::SESSION_OUTPUT_MIX);
    if (chain != 0) {
        uint32_t v = (uint32_t)(masterVolume * (1 << 24));
        chain->setVolume_l(&v, &v);
        masterVolume = (float)((v + (1 << 23)) >> 24);
        chain.clear();
    }


    for (size_t i=0 ; i<count ; i++) {
        sp<Track> t = activeTracks[i].promote();
        if (t == 0) continue;


        Track* const track = t.get();
        audio_track_cblk_t* cblk = track->cblk();


        // The first time a track is added we wait
        // for all its buffers to be filled before processing it
        mAudioMixer->setActiveTrack(track->name());
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setActiveTrack(int track)
{
    if (uint32_t(track-TRACK0) >= MAX_NUM_TRACKS) {
        return BAD_VALUE;
    }
    mActiveTrack = track - TRACK0;
    return NO_ERROR;
}
—————————————————————-
        if (cblk->framesReady() && track->isReady() &&
                !track->isPaused() && !track->isTerminated())
        {
            //LOGV("track %d u=%08x, s=%08x [OK] on thread %p", track->name(), cblk->user, cblk->server, this);


            mixedTracks++;


            // track->mainBuffer() != mMixBuffer means there is an effect chain
            // connected to the track
            chain.clear();
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                // Delegate volume control to effect in track effect chain if needed
                if (chain != 0) {
                    tracksWithEffect++;
                } else {
                    LOGW("prepareTracks_l(): track %08x attached to effect but no chain found on session %d",
                            track->name(), track->sessionId());
                }
            }

 


            int param = AudioMixer::VOLUME;
            if (track->mFillingUpStatus == Track::FS_FILLED) {
                // no ramp for the first volume setting
                track->mFillingUpStatus = Track::FS_ACTIVE;
                if (track->mState == TrackBase::RESUMING) {
                    track->mState = TrackBase::ACTIVE;
                    param = AudioMixer::RAMP_VOLUME;
                }
            } else if (cblk->server != 0) {
                // If the track is stopped before the first frame was mixed,
                // do not apply ramp
                param = AudioMixer::RAMP_VOLUME;
            }


            // compute volume for this track
            uint32_t vl, vr, va;
            if (track->isMuted() || track->isPausing() ||
                mStreamTypes[track->type()].mute) {
                vl = vr = va = 0;
                if (track->isPausing()) {
                    track->setPaused();
                }
            } else {


                // read original volumes with volume control
                float typeVolume = mStreamTypes[track->type()].volume;
#ifdef LVMX
                bool streamMute=false;
                // read the volume from the LivesVibes audio engine.
                if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
                {
                    LifeVibes::getStreamVolumes(audioOutputType, track->type(), &typeVolume, &streamMute);
                    if (streamMute) {
                        typeVolume = 0;
                    }
                }
#endif
                float v = masterVolume * typeVolume;
                vl = (uint32_t)(v * cblk->volume[0]) << 12;
                vr = (uint32_t)(v * cblk->volume[1]) << 12;


                va = (uint32_t)(v * cblk->sendLevel);
            }
            // Delegate volume control to effect in track effect chain if needed
            if (chain != 0 && chain->setVolume_l(&vl, &vr)) {
                // Do not ramp volume if volume is controlled by effect
                param = AudioMixer::VOLUME;
                track->mHasVolumeController = true;
            } else {
                // force no volume ramp when volume controller was just disabled or removed
                // from effect chain to avoid volume spike
                if (track->mHasVolumeController) {
                    param = AudioMixer::VOLUME;
                }
                track->mHasVolumeController = false;
            }


            // Convert volumes from 8.24 to 4.12 format
            int16_t left, right, aux;
            uint32_t v_clamped = (vl + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            left = int16_t(v_clamped);
            v_clamped = (vr + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            right = int16_t(v_clamped);


            if (va > MAX_GAIN_INT) va = MAX_GAIN_INT;
            aux = int16_t(va);


#ifdef LVMX
            if ( tracksConnectedChanged || stateChanged )
            {
                 // only do the ramp when the volume is changed by the user / application
                 param = AudioMixer::VOLUME;
            }
#endif


            // XXX: these things DON'T need to be done each time
// As mentioned earlier, getNextBuffer is called via: t.bufferProvider->getNextBuffer(&b);
// if our guess is right, this should be where bufferProvider gets assigned
            mAudioMixer->setBufferProvider(track);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setBufferProvider(AudioBufferProvider* buffer)
{
    mState.tracks[ mActiveTrack ].bufferProvider = buffer;
    return NO_ERROR;
}
—————————————————————-
// This is the AudioMixer::enable function discussed earlier
            mAudioMixer->enable(AudioMixer::MIXING);


// These are calls to the AudioMixer::setParameter function discussed earlier
            mAudioMixer->setParameter(param, AudioMixer::VOLUME0, (void *)left);
            mAudioMixer->setParameter(param, AudioMixer::VOLUME1, (void *)right);
            mAudioMixer->setParameter(param, AudioMixer::AUXLEVEL, (void *)aux);
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::FORMAT, (void *)track->format());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::CHANNEL_COUNT, (void *)track->channelCount());
            mAudioMixer->setParameter(
                AudioMixer::RESAMPLE,
                AudioMixer::SAMPLE_RATE,
                (void *)(cblk->sampleRate));
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::AUX_BUFFER, (void *)track->auxBuffer());


            // reset retry count
            track->mRetryCount = kMaxTrackRetries;
            mixerStatus = MIXER_TRACKS_READY;
        } else {
            //LOGV("track %d u=%08x, s=%08x [NOT READY] on thread %p", track->name(), cblk->user, cblk->server, this);
            if (track->isStopped()) {
                track->reset();
            }
            if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                // We have consumed all the buffers of this track.
                // Remove it from the list of active tracks.
                tracksToRemove->add(track);
            } else {
                // No buffers for this track. Give it a few chances to
                // fill a buffer, then remove it from active list.
// If the track has no data, give it a few chances to fill a buffer.
// If it still has none after mRetryCount retries, drop it from the active list.
                if (--(track->mRetryCount) <= 0) {
                    LOGV("BUFFER TIMEOUT: remove(%d) from active list on thread %p", track->name(), this);
                    tracksToRemove->add(track);
                    // indicate to client process that the track was disabled because of underrun
                    cblk->flags |= CBLK_DISABLED_ON;
                } else if (mixerStatus != MIXER_TRACKS_READY) {
                    mixerStatus = MIXER_TRACKS_ENABLED;
                }
            }
            mAudioMixer->disable(AudioMixer::MIXING);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::disable(int name)
{
    switch (name) {
        case MIXING: {
            if (mState.tracks[ mActiveTrack ].enabled != 0) {
                mState.tracks[ mActiveTrack ].enabled = 0;
                LOGV("disable(%d)", mActiveTrack);
                invalidateState(1<<mActiveTrack);
            }
        } break;
        default:
            return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}
The disable function is also called here.
—————————————————————-
        }
    }


// Kill off the tracks that need to go
    // remove all the tracks that need to be…
    count = tracksToRemove->size();
    if (UNLIKELY(count)) {
        for (size_t i=0 ; i<count ; i++) {
            const sp<Track>& track = tracksToRemove->itemAt(i);
            mActiveTracks.remove(track);
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                if (chain != 0) {
                    LOGV("stopping track on chain %p for session Id: %d", chain.get(), track->sessionId());
                    chain->stopTrack();
                }
            }
            if (track->isTerminated()) {
                mTracks.remove(track);
                deleteTrackName_l(track->mName);
            }
        }
    }


    // mix buffer must be cleared if all tracks are connected to an
    // effect chain as in this case the mixer will not write to
    // mix buffer and track effects will accumulate into it
    if (mixedTracks != 0 && mixedTracks == tracksWithEffect) {
        memset(mMixBuffer, 0, mFrameCount * mChannelCount * sizeof(int16_t));
    }


    return mixerStatus;
}
—————————————————————-
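The 8.24-to-4.12 conversion in prepareTracks_l rounds by adding half an LSB (1 << 11) before shifting right by 12, then clamps to MAX_GAIN_INT. A sketch of that arithmetic (the clamp limit is passed as a parameter here because the exact value of MAX_GAIN_INT is defined elsewhere in the headers):

```cpp
#include <cassert>
#include <cstdint>

// Convert an 8.24 fixed-point volume to 4.12, rounding to nearest and
// clamping to a maximum gain (limit value is an illustrative assumption).
static int16_t to4_12(uint32_t v8_24, uint32_t maxGainInt = 0x2000) {
    uint32_t v = (v8_24 + (1u << 11)) >> 12;  // +half LSB rounds to nearest
    if (v > maxGainInt) v = maxGainInt;       // clamp boosted volumes
    return (int16_t)v;
}
```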
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
bool AudioFlinger::MixerThread::threadLoop()
{
    Vector< sp<Track> > tracksToRemove;
    uint32_t mixerStatus = MIXER_IDLE;
    nsecs_t standbyTime = systemTime();
    size_t mixBufferSize = mFrameCount * mFrameSize;
    // FIXME: Relaxed timing because of a certain device that can't meet latency
    // Should be reduced to 2x after the vendor fixes the driver issue
    nsecs_t maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
    nsecs_t lastWarning = 0;
    bool longStandbyExit = false;
    uint32_t activeSleepTime = activeSleepTimeUs();
    uint32_t idleSleepTime = idleSleepTimeUs();
    uint32_t sleepTime = idleSleepTime;
    Vector< sp<EffectChain> > effectChains;


    while (!exitPending())
    {
        processConfigEvents();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioFlinger::ThreadBase::processConfigEvents()
{
    mLock.lock();
    while(!mConfigEvents.isEmpty()) {
        LOGV("processConfigEvents() remaining events %d", mConfigEvents.size());
        ConfigEvent *configEvent = mConfigEvents[0];
        mConfigEvents.removeAt(0);
        // release mLock before locking AudioFlinger mLock: lock order is always
        // AudioFlinger then ThreadBase to avoid cross deadlock
        mLock.unlock();
        mAudioFlinger->mLock.lock();
        audioConfigChanged_l(configEvent->mEvent, configEvent->mParam);
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// audioConfigChanged_l() must be called with AudioFlinger::mLock held
void AudioFlinger::audioConfigChanged_l(int event, int ioHandle, void *param2)
{
    size_t size = mNotificationClients.size();
    for (size_t i = 0; i < size; i++) {
        mNotificationClients.valueAt(i)->client()->ioConfigChanged(event, ioHandle, param2);
    }
}
—————————————————————-
        mAudioFlinger->mLock.unlock();
        delete configEvent;
        mLock.lock();
    }
    mLock.unlock();
}
—————————————————————-
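Note the comment about lock order inside processConfigEvents: AudioFlinger's mLock must always be taken before a ThreadBase's mLock, so the thread drops its own lock before taking the wider one and reacquires it afterwards. A minimal sketch of that drop-and-reacquire pattern (illustrative names, std::mutex standing in for Android's Mutex):

```cpp
#include <cassert>
#include <mutex>

// Model of the two-lock discipline: flingerLock always before threadLock.
struct TwoLockSketch {
    std::mutex flingerLock;  // process-wide lock (AudioFlinger::mLock)
    std::mutex threadLock;   // per-thread lock (ThreadBase::mLock)
    int eventsHandled = 0;

    void handleOneEvent() {
        threadLock.lock();
        // ...pop the event from the queue under threadLock...
        threadLock.unlock();   // drop before taking the wider lock

        flingerLock.lock();    // respects the flinger -> thread order
        ++eventsHandled;       // deliver the notification
        flingerLock.unlock();

        threadLock.lock();     // reacquire to check for more events
        threadLock.unlock();
    }
};
```

If the thread held its own lock while waiting for the AudioFlinger lock, another path holding the AudioFlinger lock and waiting for the thread lock would deadlock; keeping a single global acquisition order rules that out.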


        mixerStatus = MIXER_IDLE;
        { // scope for mLock


            Mutex::Autolock _l(mLock);


            if (checkForNewParameters_l()) {
                mixBufferSize = mFrameCount * mFrameSize;
                // FIXME: Relaxed timing because of a certain device that can't meet latency
                // Should be reduced to 2x after the vendor fixes the driver issue
                maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
                activeSleepTime = activeSleepTimeUs();
                idleSleepTime = idleSleepTimeUs();
            }


            const SortedVector< wp<Track> >& activeTracks = mActiveTracks;


            // put audio hardware into standby after short delay
            if UNLIKELY((!activeTracks.size() && systemTime() > standbyTime) ||
                        mSuspended) {
                if (!mStandby) {
                    LOGV("Audio hardware entering standby, mixer %p, mSuspended %d\n", this, mSuspended);
                    mOutput->standby();
                    mStandby = true;
                    mBytesWritten = 0;
                }


                if (!activeTracks.size() && mConfigEvents.isEmpty()) {
                    // we're about to wait, flush the binder command buffer
                    IPCThreadState::self()->flushCommands();


                    if (exitPending()) break;


                    // wait until we have something to do...
                    LOGV("MixerThread %p TID %d going to sleep\n", this, gettid());
                    mWaitWorkCV.wait(mLock);
                    LOGV("MixerThread %p TID %d waking up\n", this, gettid());


                    if (mMasterMute == false) {
                        char value[PROPERTY_VALUE_MAX];
                        property_get("ro.audio.silent", value, "0");
                        if (atoi(value)) {
                            LOGD("Silence is golden");
                            setMasterMute(true);
                        }
                    }


                    standbyTime = systemTime() + kStandbyTimeInNsecs;
                    sleepTime = idleSleepTime;
                    continue;
                }
            }


// This is where prepareTracks_l gets called
            mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);


            // prevent any changes in effect chain list and in each effect chain
            // during mixing and effect process as the audio buffers could be deleted
            // or modified if an effect is created or deleted
            lockEffectChains_l(effectChains);
       }


        if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
// This is where the AudioMixer::process function introduced earlier gets called
            // mix buffers...
            mAudioMixer->process();
            sleepTime = 0;
            standbyTime = systemTime() + kStandbyTimeInNsecs;
            //TODO: delay standby when effects have a tail
        } else {
            // If no tracks are ready, sleep once for the duration of an output
            // buffer size, then write 0s to the output
            if (sleepTime == 0) {
                if (mixerStatus == MIXER_TRACKS_ENABLED) {
                    sleepTime = activeSleepTime;
                } else {
                    sleepTime = idleSleepTime;
                }
            } else if (mBytesWritten != 0 ||
                       (mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)) {
                memset (mMixBuffer, 0, mixBufferSize);
                sleepTime = 0;
                LOGV_IF((mBytesWritten == 0 && (mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)), "anticipated start");
            }
            // TODO add standby time extension fct of effect tail
        }


        if (mSuspended) {
            sleepTime = suspendSleepTimeUs();
        }
// Write the data to the hardware
        // sleepTime == 0 means we must write to audio hardware
        if (sleepTime == 0) {
             for (size_t i = 0; i < effectChains.size(); i ++) {
                 effectChains[i]->process_l();
             }
             // enable changes in effect chain
             unlockEffectChains(effectChains);
#ifdef LVMX
            int audioOutputType = LifeVibes::getMixerType(mId, mType);
            if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType)) {
               LifeVibes::process(audioOutputType, mMixBuffer, mixBufferSize);
            }
#endif
            mLastWriteTime = systemTime();
            mInWrite = true;
            mBytesWritten += mixBufferSize;


// This is where the write happens.
            int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
            if (bytesWritten < 0) mBytesWritten -= mixBufferSize;
            mNumWrites++;
            mInWrite = false;
            nsecs_t now = systemTime();
            nsecs_t delta = now - mLastWriteTime;
            if (delta > maxPeriod) {
                mNumDelayedWrites++;
                if ((now - lastWarning) > kWarningThrottle) {
                    LOGW("write blocked for %llu msecs, %d delayed writes, thread %p",
                            ns2ms(delta), mNumDelayedWrites, this);
                    lastWarning = now;
                }
                if (mStandby) {
                    longStandbyExit = true;
                }
            }
            mStandby = false;
        } else {
            // enable changes in effect chain
            unlockEffectChains(effectChains);
            usleep(sleepTime);
        }


        // finally let go of all our tracks, without the lock held
        // since we can't guarantee the destructors won't acquire that
        // same lock.
        tracksToRemove.clear();


        // Effect chains will be actually deleted here if they were removed from
        // mEffectChains list during mixing or effects processing
        effectChains.clear();
    }


    if (!mStandby) {
        mOutput->standby();
    }


    LOGV("MixerThread %p exiting", this);
    return false;
}
—————————————————————-
—————————————————————-


—————————————————————-


Earlier, while looking at AudioMixer::process__OneTrack16BitsStereoNoResampling, one question was left open:
how is the data placed in track_t's mainBuffer consumed?
The main job of AudioMixer::process__OneTrack16BitsStereoNoResampling is to copy data from the audio_track_cblk_t into track_t's mainBuffer.
Let's first look at where mainBuffer comes from.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mainBuffer is a member of track_t.
It is assigned in AudioMixer::setParameter.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        if (name == MAIN_BUFFER) {
            if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
                LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
—————————————————————-
AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setParameter.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());

This track was also used earlier:
mAudioMixer->setBufferProvider(track);


Where track comes from:
Track* const track = t.get();


Where t comes from:
sp<Track> t = activeTracks[i].promote();


activeTracks is a parameter of AudioFlinger::MixerThread::prepareTracks_l.
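The promote() step above is the standard weak-to-strong conversion: the thread holds wp<Track> so it does not keep tracks alive by itself, and promote() yields a valid sp<Track> only if the track still exists. The same pattern can be shown with the C++ standard library's std::weak_ptr, used here purely as a stand-in for Android's sp/wp, which behave analogously.

```cpp
#include <cassert>
#include <memory>

// Illustrative Track; wp<T>::promote() corresponds to std::weak_ptr<T>::lock().
struct Track { int id = 0; };

// Returns a strong reference if the track is still alive, or null otherwise,
// mirroring: sp<Track> t = activeTracks[i].promote();
std::shared_ptr<Track> promoteTrack(const std::weak_ptr<Track>& weak) {
    return weak.lock();  // null when the Track was already destroyed
}
```

prepareTracks_l must check the promoted pointer before using it, precisely because a client can release its AudioTrack at any moment.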


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AudioFlinger::MixerThread::prepareTracks_l is called in AudioFlinger::MixerThread::threadLoop:
mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);


Where activeTracks comes from:
const SortedVector< wp<Track> >& activeTracks = mActiveTracks;


mActiveTracks is a member variable of PlaybackThread.


AudioFlinger::PlaybackThread::addTrack_l adds entries to mActiveTracks.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// addTrack_l() must be called with ThreadBase::mLock held
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;


    // set retry count for buffer fill
    track->mRetryCount = kMaxTrackStartupRetries;
    if (mActiveTracks.indexOf(track) < 0) {
        // the track is newly added, make sure it fills up all its
        // buffers before playing. This is to ensure the client will
        // effectively get the latency it requested.
        track->mFillingUpStatus = Track::FS_FILLING;
        track->mResetDone = false;
        mActiveTracks.add(track);
        if (track->mainBuffer() != mMixBuffer) {
            sp<EffectChain> chain = getEffectChain_l(track->sessionId());
            if (chain != 0) {
                LOGV("addTrack_l() starting track on chain %p for session %d", chain.get(), track->sessionId());
                chain->startTrack();
            }
        }


        status = NO_ERROR;
    }


    LOGV("mWaitWorkCV.broadcast");
    mWaitWorkCV.broadcast();


    return status;
}
—————————————————————-


AudioFlinger::PlaybackThread::Track::start calls AudioFlinger::PlaybackThread::addTrack_l.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::start()
{
    status_t status = NO_ERROR;
    LOGV("start(%d), calling thread %d session %d",
            mName, IPCThreadState::self()->getCallingPid(), mSessionId);
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        int state = mState;
        // here the track could be either new, or restarted
        // in both cases "unstop" the track
        if (mState == PAUSED) {
            mState = TrackBase::RESUMING;
            LOGV("PAUSED => RESUMING (%d) on thread %p", mName, this);
        } else {
            mState = TrackBase::ACTIVE;
            LOGV("? => ACTIVE (%d) on thread %p", mName, this);
        }


        if (!isOutputTrack() && state != ACTIVE && state != RESUMING) {
            thread->mLock.unlock();
            status = AudioSystem::startOutput(thread->id(),
                                              (AudioSystem::stream_type)mStreamType,
                                              mSessionId);
            thread->mLock.lock();
        }
        if (status == NO_ERROR) {
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            playbackThread->addTrack_l(this);
        } else {
            mState = state;
        }
    } else {
        status = BAD_VALUE;
    }
    return status;
}
—————————————————————-


AudioFlinger::TrackHandle::start calls AudioFlinger::PlaybackThread::Track::start.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}
—————————————————————-


Pushing further up the call chain from here gets awkward, so let's jump to the top and dig downward instead.


From the Java test code we know that after write() has written the data, play() is normally called.
For example, in testSetStereoVolumeMid:
        track.write(data, 0, data.length);
        track.play();

The implementation of play():
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    /**
     * Starts playing an AudioTrack.
     * @throws IllegalStateException
     */
    public void play()
    throws IllegalStateException {
        if (mState != STATE_INITIALIZED) {
            throw(new IllegalStateException("play() called on uninitialized AudioTrack."));
        }


        synchronized(mPlayStateLock) {
// Calls into the native start function.
            native_start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
static void
android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
{
// This AudioTrack pointer was stored here when the AudioTrack was created
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
        thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for start()");
        return;
    }


    lpTrack->start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioTrack::start()
{
    sp<AudioTrackThread> t = mAudioTrackThread;
    status_t status;


    LOGV("start %p", this);
    if (t != 0) {
        if (t->exitPending()) {
            if (t->requestExitAndWait() == WOULD_BLOCK) {
                LOGE("AudioTrack::start called from thread");
                return;
            }
        }
        t->mLock.lock();
     }


    if (android_atomic_or(1, &mActive) == 0) {
        mNewPosition = mCblk->server + mUpdatePeriod;
        mCblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
        mCblk->waitTimeMs = 0;
        mCblk->flags &= ~CBLK_DISABLED_ON;
        if (t != 0) {
           t->run("AudioTrackThread", THREAD_PRIORITY_AUDIO_CLIENT);
        } else {
            setpriority(PRIO_PROCESS, 0, THREAD_PRIORITY_AUDIO_CLIENT);
        }


        if (mCblk->flags & CBLK_INVALID_MSK) {
            LOGW("start() track %p invalidated, creating a new one", this);
            // no need to clear the invalid flag as this cblk will not be used anymore
            // force new track creation
            status = DEAD_OBJECT;
        } else {
// mAudioTrack here was created and assigned in AudioTrack::createTrack:
// sp<IAudioTrack> track = audioFlinger->createTrack()
// mAudioTrack = track;

// AudioFlinger::createTrack returns a TrackHandle object:
// trackHandle = new TrackHandle(track);
// return trackHandle;
// In other words, the call here goes to AudioFlinger::TrackHandle::start
            status = mAudioTrack->start();
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
// A track was passed in through TrackHandle's constructor
// That track was created via: track = thread->createTrack_l()
// AudioFlinger::PlaybackThread::createTrack_l returns an AudioFlinger::PlaybackThread::Track object
// So the call here goes to AudioFlinger::PlaybackThread::Track::start


// !!!! This connects back to where we started


// In other words, the track object used when AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setParameter is exactly the one created when we created the AudioTrack
// Which means AudioMixer::process__OneTrack16BitsStereoNoResampling really copies the data into the AudioTrack object's main buffer
    return mTrack->start();
}
—————————————————————-
        }
        if (status == DEAD_OBJECT) {
            LOGV("start() dead IAudioTrack: creating a new one");
            status = createTrack(mStreamType, mCblk->sampleRate, mFormat, mChannelCount,
                                 mFrameCount, mFlags, mSharedBuffer, getOutput(), false);
            if (status == NO_ERROR) {
                status = mAudioTrack->start();
                if (status == NO_ERROR) {
                    mNewPosition = mCblk->server + mUpdatePeriod;
                }
            }
        }
        if (status != NO_ERROR) {
            LOGV("start() failed");
            android_atomic_and(~1, &mActive);
            if (t != 0) {
                t->requestExit();
            } else {
                setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL);
            }
        }
    }


    if (t != 0) {
        t->mLock.unlock();
    }
}
—————————————————————-
}
—————————————————————-
            mPlayState = PLAYSTATE_PLAYING;
        }
    }
—————————————————————-
—————————————————————-


—————————————————————-
—————————————————————-


One open question remains.
AudioFlinger::MixerThread::threadLoop writes to the hardware with the following code:
int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
mMixBuffer is a member variable of MixerThread.


But AudioMixer::process__OneTrack16BitsStereoNoResampling copies data into the AudioTrack object's mainBuffer.
How are these two connected?


The following chain appears to tie them together:
1. After AudioFlinger::createTrack successfully calls thread->createTrack_l, it may, depending on conditions, call AudioFlinger::moveEffectChain_l.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelCount, frameCount, sharedBuffer, lSessionId, &lStatus);


        // move effect chain to this output thread if an effect on same session was waiting
        // for a track to be created
        if (lStatus == NO_ERROR && effectThread != NULL) {
            Mutex::Autolock _dl(thread->mLock);
            Mutex::Autolock _sl(effectThread->mLock);
            moveEffectChain_l(lSessionId, effectThread, thread, true);
        }
—————————————————————-


2. AudioFlinger::moveEffectChain_l calls AudioFlinger::PlaybackThread::removeEffectChain_l.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // remove chain first. This is useful only if reconfiguring effect chain on same output thread,
    // so that a new chain is created with correct parameters when first effect is added. This is
    // otherwise unecessary as removeEffect_l() will remove the chain when last effect is
    // removed.
    srcThread->removeEffectChain_l(chain);
—————————————————————-


3. AudioFlinger::PlaybackThread::removeEffectChain_l sets the PlaybackThread's mMixBuffer as the AudioTrack object's main buffer.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
size_t AudioFlinger::PlaybackThread::removeEffectChain_l(const sp<EffectChain>& chain)
{
    int session = chain->sessionId();


    LOGV("removeEffectChain_l() %p from thread %p for session %d", chain.get(), this, session);


    for (size_t i = 0; i < mEffectChains.size(); i++) {
        if (chain == mEffectChains[i]) {
            mEffectChains.removeAt(i);
            // detach all tracks with same session ID from this chain
            for (size_t i = 0; i < mTracks.size(); ++i) {
                sp<Track> track = mTracks[i];
                if (session == track->sessionId()) {
                    track->setMainBuffer(mMixBuffer);
                }
            }
            break;
        }
    }
    return mEffectChains.size();
}
—————————————————————-


In AudioFlinger::openOutput, after the HAL-layer openOutputStream successfully opens an output, a MixerThread object is created
and added to the member variable mPlaybackThreads:
            thread = new MixerThread(this, output, id, *pDevices);
            mPlaybackThreads.add(id, thread);
###########################################################


&&&&&&&&&&&&&&&&&&&&&&&Summary&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
MixerThread's threadLoop first checks whether data has already been written through the AudioTrack object into the audio_track_cblk_t.
If it has, the data is copied into the AudioTrack object's main buffer.
When the AudioTrack object was created, its main buffer was already associated with the PlaybackThread's mix buffer.
threadLoop then calls the HAL-layer write function to push the data to the hardware.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
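The summary can be condensed into one loop iteration. The sketch below models the shared control block with a user/server index pair whose framesReady() matches the playback branch of the real function (u - s); everything else (the names, the single track, int16 samples, the vector standing in for the hardware) is a simplification, not the real AudioFlinger code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of audio_track_cblk_t: user counts frames written by the client,
// server counts frames consumed by the mixer.
struct ModelCblk {
    uint64_t user = 0;
    uint64_t server = 0;
    std::vector<int16_t> data;
    uint64_t framesReady() const { return user - server; }  // playback case
};

// One pass of the (much simplified) threadLoop: check what is ready, copy it
// into the mix buffer, step the server index, then "write to hardware".
size_t mixerPass(ModelCblk& cblk, std::vector<int16_t>& mixBuf,
                 std::vector<int16_t>& hardware) {
    uint64_t ready = cblk.framesReady();
    if (ready == 0) return 0;  // nothing ready: the real loop would sleep
    size_t n = static_cast<size_t>(
        std::min<uint64_t>(ready, mixBuf.size()));
    std::memcpy(mixBuf.data(), cblk.data.data() + cblk.server,
                n * sizeof(int16_t));   // "mix" into the mix buffer
    cblk.server += n;                   // consume the frames
    hardware.insert(hardware.end(), mixBuf.begin(),
                    mixBuf.begin() + n);  // stands in for mOutput->write()
    return n;
}
```

Two passes with a 2-frame mix buffer drain a 4-frame write, which mirrors how the real thread keeps looping until framesReady() reports nothing left.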

Excerpted from: Jiangfeng's column (江風的專欄)
