Android Audio Code Analysis 2 – The getMinBufferSize Function

 

The AudioTrack usage example made use of the getMinBufferSize function; today let's dig it out and chew it over.

 

 

*****************************************Source*************************************************

 static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
        int channelCount = 0;
        switch(channelConfig) {
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            channelCount = 2;
            break;
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

        if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
            loge("getMinBufferSize(): Invalid audio format.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

        if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {
            loge("getMinBufferSize(): " + sampleRateInHz +"Hz is not a supported sample rate.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if ((size == -1) || (size == 0)) {
            loge("getMinBufferSize(): error querying hardware");
            return AudioTrack.ERROR;
        }
        else {
            return size;
        }
    }

***********************************************************************************************

Source path:

frameworks\base\media\java\android\media\AudioTrack.java

 

 

###########################################Notes##############################################################

First, let's look at the Javadoc comment that ships with it:

    /**
     * Returns the minimum buffer size required for the successful creation of an AudioTrack
     * object to be created in the {@link #MODE_STREAM} mode. Note that this size doesn't
     * guarantee a smooth playback under load, and higher values should be chosen according to
     * the expected frequency at which the buffer will be refilled with additional data to play.
     * @param sampleRateInHz the sample rate expressed in Hertz.
     * @param channelConfig describes the configuration of the audio channels.
     *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
     *   {@link AudioFormat#CHANNEL_OUT_STEREO}
     * @param audioFormat the format in which the audio data is represented.
     *   See {@link AudioFormat#ENCODING_PCM_16BIT} and
     *   {@link AudioFormat#ENCODING_PCM_8BIT}
     * @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
     *   or {@link #ERROR} if the implementation was unable to query the hardware for its output
     *     properties,
     *   or the minimum buffer size expressed in bytes.
     */

From the comment we can see that the minimum buffer size returned by this function only guarantees the successful creation of an AudioTrack object in MODE_STREAM mode. It does not guarantee smooth playback under load.
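To make the calling convention concrete, here is a minimal usage sketch of my own (not from the original article); doubling the buffer size follows the comment's advice to choose a value above the minimum:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class MinBufferDemo {
        // Returns a streaming AudioTrack sized from getMinBufferSize(), or null on error.
        public static AudioTrack createTrack() {
            int sampleRate = 44100;
            int minSize = AudioTrack.getMinBufferSize(
                    sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT);
            if (minSize == AudioTrack.ERROR_BAD_VALUE || minSize == AudioTrack.ERROR) {
                return null; // invalid parameters, or the hardware query failed
            }
            // Use twice the minimum so refills under load are less likely to underrun.
            return new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minSize * 2,
                    AudioTrack.MODE_STREAM);
        }
    }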

 

 

1. The parameters need no elaboration here; see the comment above, and the previous article also covered them.

2. An internal variable is defined:

     int channelCount = 0;

   It records the number of channels and is used later when calling the native function native_get_min_buff_size. So the buffer size is, in the end, decided by the native layer.

3. Next, the channel count is computed from the channel configuration:

        switch(channelConfig) {
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            channelCount = 2;
            break;
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

   The MONO variants map to 1, the STEREO variants to 2.

   However, as we saw earlier, these are not the only channel types. There is this whole pile:

    public static final int CHANNEL_OUT_FRONT_LEFT = 0x4;
    public static final int CHANNEL_OUT_FRONT_RIGHT = 0x8;
    public static final int CHANNEL_OUT_FRONT_CENTER = 0x10;
    public static final int CHANNEL_OUT_LOW_FREQUENCY = 0x20;
    public static final int CHANNEL_OUT_BACK_LEFT = 0x40;
    public static final int CHANNEL_OUT_BACK_RIGHT = 0x80;
    public static final int CHANNEL_OUT_FRONT_LEFT_OF_CENTER = 0x100;
    public static final int CHANNEL_OUT_FRONT_RIGHT_OF_CENTER = 0x200;
    public static final int CHANNEL_OUT_BACK_CENTER = 0x400;
    public static final int CHANNEL_OUT_MONO = CHANNEL_OUT_FRONT_LEFT;
    public static final int CHANNEL_OUT_STEREO = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT);
    public static final int CHANNEL_OUT_QUAD = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);
    public static final int CHANNEL_OUT_SURROUND = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_BACK_CENTER);
    public static final int CHANNEL_OUT_5POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);
    public static final int CHANNEL_OUT_7POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT |
            CHANNEL_OUT_FRONT_LEFT_OF_CENTER | CHANNEL_OUT_FRONT_RIGHT_OF_CENTER);

 

 

   Moreover, AudioFormat.CHANNEL_CONFIGURATION_MONO and AudioFormat.CHANNEL_CONFIGURATION_STEREO are not defined among that pile; they are defined just before it:

    /** Mono audio configuration */
    /** @deprecated use CHANNEL_OUT_MONO or CHANNEL_IN_MONO instead  */
    @Deprecated    public static final int CHANNEL_CONFIGURATION_MONO      = 2;
    /** Stereo (2 channel) audio configuration */
    /** @deprecated use CHANNEL_OUT_STEREO or CHANNEL_IN_STEREO instead  */
    @Deprecated    public static final int CHANNEL_CONFIGURATION_STEREO    = 3;

 

 

   Do the other channel types simply never need this minimum buffer size?
   Or is it that, for now, only mono and stereo are supported?
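   Since each CHANNEL_OUT_* constant above is a distinct bit, one way the switch could generalize is to count the set bits in the mask. This is only a sketch of mine, not what the framework code above does:

    // Sketch (not the framework's code): derive a channel count from a
    // CHANNEL_OUT_* bit mask by counting its set bits.
    int count = Integer.bitCount(AudioFormat.CHANNEL_OUT_5POINT1); // 6 channels

   Note that the deprecated CHANNEL_CONFIGURATION_* values (2 and 3) are plain indices rather than bit masks, so they would still need special-casing.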

 

 

4. Next comes the audio format check, i.e. the number of bits per sample:

        if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
            loge("getMinBufferSize(): Invalid audio format.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

   Clearly, only 16-bit and 8-bit PCM are supported.
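   As a restatement of mine (matching the ternary in the native code shown further down), the accepted formats imply the bytes-per-sample factor:

    // 2 bytes per sample for PCM 16-bit, otherwise 1 byte (PCM 8-bit)
    int bytesPerSample = (audioFormat == AudioFormat.ENCODING_PCM_16BIT) ? 2 : 1;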

 

 

5. Check the sample rate:

        if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {
            loge("getMinBufferSize(): " + sampleRateInHz +"Hz is not a supported sample rate.");
            return AudioTrack.ERROR_BAD_VALUE;
        }

   Only rates from 4000 Hz to 48000 Hz are supported.

 

 

6. Finally, control drops into the native layer:

        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if ((size == -1) || (size == 0)) {
            loge("getMinBufferSize(): error querying hardware");
            return AudioTrack.ERROR;
        }
        else {
            return size;
        }

   So the real work happens in native code; the Java layer only performs some auxiliary checks.

 

 

   From the JNI function mapping table in the earlier article, native_get_min_buff_size corresponds to the native function android_media_AudioTrack_get_min_buff_size.
   Path: frameworks\base\core\jni\android_media_AudioTrack.cpp

 

 

   The implementation of android_media_AudioTrack_get_min_buff_size:

// returns the minimum required size for the successful creation of a streaming AudioTrack
// returns -1 if there was an error querying the hardware.
static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    int frameCount = 0;
    if (AudioTrack::getMinFrameCount(&frameCount, AudioSystem::DEFAULT,
            sampleRateInHertz) != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1);
}

 

 

   So the minimum buffer size is frameCount multiplied by the channel count, then by 2 or 1 depending on the audio format (2 bytes per sample for 16-bit PCM, 1 for 8-bit).
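   With illustrative numbers of my own (not from the article):

    // Suppose native reports frameCount = 4410 frames, and the caller asked
    // for stereo (2 channels) 16-bit PCM (2 bytes per sample):
    int size = 4410 * 2 * 2; // = 17640 bytes returned by getMinBufferSize()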

   The channel count and audio format are passed in, so nothing more to say about them.
   frameCount is obtained by calling AudioTrack::getMinFrameCount; as the name suggests, this should be the minimum number of frames.
   The three arguments passed to it:
     &frameCount receives the frame count.
     sampleRateInHertz is the sample rate.
     AudioSystem::DEFAULT is hard-coded. It is defined in the AudioSystem class; the other definitions are as follows:

    enum stream_type {
        DEFAULT          =-1,
        VOICE_CALL       = 0,
        SYSTEM           = 1,
        RING             = 2,
        MUSIC            = 3,
        ALARM            = 4,
        NOTIFICATION     = 5,
        BLUETOOTH_SCO    = 6,
        ENFORCED_AUDIBLE = 7, // Sounds that cannot be muted by user and must be routed to speaker
        DTMF             = 8,
        TTS              = 9,
        NUM_STREAM_TYPES
    };

 

 

   So it is the stream type.
   Why isn't a stream type passed in when calling getMinBufferSize, instead of using DEFAULT here?

 

 

   Let's set that aside for now and continue with AudioTrack::getMinFrameCount.

 

 

   The implementation of AudioTrack::getMinFrameCount:

status_t AudioTrack::getMinFrameCount(
        int* frameCount,
        int streamType,
        uint32_t sampleRate)
{
    int afSampleRate;
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    int afFrameCount;
    if (AudioSystem::getOutputFrameCount(&afFrameCount, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    uint32_t afLatency;
    if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
        return NO_INIT;
    }

    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;

    *frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
              afFrameCount * minBufCount * sampleRate / afSampleRate;
    return NO_ERROR;
}

 

 

   It starts by calling three AudioSystem functions. We have run into them before, but ignored them at the time; today let's take a look.

 

 

   The implementation of AudioSystem::getOutputSamplingRate:

status_t AudioSystem::getOutputSamplingRate(int* samplingRate, int streamType)
{
    OutputDescriptor *outputDesc;
    audio_io_handle_t output;

    if (streamType == DEFAULT) {
        streamType = MUSIC;
    }

    output = getOutput((stream_type)streamType);
    if (output == 0) {
        return PERMISSION_DENIED;
    }

    gLock.lock();
    outputDesc = AudioSystem::gOutputs.valueFor(output);
    if (outputDesc == 0) {
        LOGV("getOutputSamplingRate() no output descriptor for output %d in gOutputs", output);
        gLock.unlock();
        const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        if (af == 0) return PERMISSION_DENIED;
        *samplingRate = af->sampleRate(output);
    } else {
        LOGV("getOutputSamplingRate() reading from output desc");
        *samplingRate = outputDesc->samplingRate;
        gLock.unlock();
    }

    LOGV("getOutputSamplingRate() streamType %d, output %d, sampling rate %d", streamType, output, *samplingRate);

    return NO_ERROR;
}

 

 

   It checks the stream type: if it is DEFAULT, it sets it to MUSIC!
   Well, how about that!
   So that is how the DEFAULT stream type is used; in effect, getMinBufferSize sizes its buffer against the MUSIC output.

 

 

   Next it obtains the output for the stream type, and then that output's descriptor.

   If the descriptor is found, its sampling rate is the one we want.
   Otherwise, it falls back to querying AudioFlinger for the sampling rate.

 

 

   The functions AudioSystem::getOutputFrameCount and AudioSystem::getOutputLatency are handled much like AudioSystem::getOutputSamplingRate.

 

 

   At this point the sampling rate, frame count, and latency have all been obtained.

   Next, minBufCount is computed:

    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;

 

 

   From the comment, the buffer depth should at least cover the audio hardware latency.
   The formula is not immediately obvious.
   First, here is an explanation of frames, excerpted from https://blog.csdn.net/innost/article/details/6125779:
     A frame is the byte size of one sample point times the channel count. Why introduce frames? Because for multi-channel audio, the byte size of a single sample point does not tell the whole story: playback has to emit every channel's data. So for convenience we speak of frames per second, which conveys the complete picture while sidestepping the channel count.
   (So a 16-bit stereo frame, for instance, is 2 bytes × 2 channels = 4 bytes.)

   Still not entirely transparent; I guessed at it for a long while without certainty, so corrections from any expert are welcome. A hedged reading follows.
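   One plausible interpretation (my own, not from the source): afFrameCount frames at afSampleRate frames per second take (1000 * afFrameCount) / afSampleRate milliseconds to play, i.e. the duration of one hardware mixer buffer. Dividing afLatency by that duration gives how many such buffers are needed to span the output latency, with a floor of 2 for double buffering. In Java, with assumed numbers:

    // Assumed illustrative numbers, not measured values:
    int afFrameCount = 1024;  // frames in one hardware mixer buffer
    int afSampleRate = 44100; // hardware output rate in Hz
    int afLatency    = 100;   // output latency in milliseconds

    int bufMs = (1000 * afFrameCount) / afSampleRate; // 23 ms per hardware buffer
    int minBufCount = afLatency / bufMs;              // 100 / 23 = 4 buffers
    if (minBufCount < 2) minBufCount = 2;             // never fewer than 2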

 

 

   Then the frame count is computed:

         *frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
              afFrameCount * minBufCount * sampleRate / afSampleRate;

   Our sampleRate is certainly not 0, so the final computation is afFrameCount * minBufCount * sampleRate / afSampleRate. A worked continuation of the earlier numbers follows.
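   Continuing the assumed numbers from the sketch above, for a track playing at 22050 Hz:

    int sampleRate = 22050; // rate requested by the app
    // Scale the hardware frame count by the requested rate so the buffer
    // holds the same playback duration at the track's own rate.
    int frameCount = afFrameCount * minBufCount * sampleRate / afSampleRate;
    // = 1024 * 4 * 22050 / 44100 = 2048 frames; for 16-bit stereo that is
    // 2048 frames * 2 channels * 2 bytes = 8192 bytes from getMinBufferSize().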

 

Excerpted from: 江風的專欄
