2025-02-15

 

The plan is to start from how the interfaces are used and work into the Audio-related source code.

The code below is test code that ships with Android.

Being lazy, I do not intend to walk through every function in full. The main functions will be savored in detail; functions I consider non-essential will only be touched on in passing.

What counts as main versus non-essential is judged from the needs of my current project.

 

 

*****************************************Source*************************************************

public void testWriteByte() throws Exception {

        // constants for test

        final String TEST_NAME = "testWriteByte";

        final int TEST_SR = 22050;

        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;

        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

        final int TEST_MODE = AudioTrack.MODE_STREAM;

        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

       

        //-------- initialization --------

        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);

        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,

                2*minBuffSize, TEST_MODE);

        byte data[] = new byte[minBuffSize];

        //--------    test        --------

        assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);

        assertTrue(TEST_NAME,

                track.write(data, 0, data.length) == data.length);

        //-------- tear down      --------

        track.release();

    }

***********************************************************************************************

Source path:

frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\MediaAudioTrackTest.java

 

 

###########################################Notes##############################################################

1. TEST_NAME needs no explanation.

2. TEST_SR is the first parameter of AudioTrack.getMinBufferSize.

   The comment on this parameter reads:

     the sample rate expressed in Hertz. That is, the sample rate in Hz.

   AudioTrack.getMinBufferSize itself will be savored later, so I will not dwell on it here.

3. TEST_CONF is the second parameter of AudioTrack.getMinBufferSize.

   The comment on this parameter reads:

     describes the configuration of the audio channels.

     *   See {@link AudioFormat#CHANNEL_OUT_MONO} and

     *   {@link AudioFormat#CHANNEL_OUT_STEREO}

   We can see it is assigned AudioFormat.CHANNEL_OUT_MONO, so let us first look at AudioFormat.

   The English comment on the AudioFormat class reads:

/**

 * The AudioFormat class is used to access a number of audio format and

 * channel configuration constants. They are for instance used

 * in {@link AudioTrack} and {@link AudioRecord}.

 *

 */

   A glance at its contents shows it mainly holds the channel definitions for the various track and record types, plus some format definitions.

   We are about to create an AudioTrack here, and the channel types available for that are:

    public static final int CHANNEL_OUT_FRONT_LEFT = 0x4;

    public static final int CHANNEL_OUT_FRONT_RIGHT = 0x8;

    public static final int CHANNEL_OUT_FRONT_CENTER = 0x10;

    public static final int CHANNEL_OUT_LOW_FREQUENCY = 0x20;

    public static final int CHANNEL_OUT_BACK_LEFT = 0x40;

    public static final int CHANNEL_OUT_BACK_RIGHT = 0x80;

    public static final int CHANNEL_OUT_FRONT_LEFT_OF_CENTER = 0x100;

    public static final int CHANNEL_OUT_FRONT_RIGHT_OF_CENTER = 0x200;

    public static final int CHANNEL_OUT_BACK_CENTER = 0x400;

    public static final int CHANNEL_OUT_MONO = CHANNEL_OUT_FRONT_LEFT;

    public static final int CHANNEL_OUT_STEREO = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT);

    public static final int CHANNEL_OUT_QUAD = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |

            CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);

    public static final int CHANNEL_OUT_SURROUND = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |

            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_BACK_CENTER);

    public static final int CHANNEL_OUT_5POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |

            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);

    public static final int CHANNEL_OUT_7POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |

            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT |

            CHANNEL_OUT_FRONT_LEFT_OF_CENTER | CHANNEL_OUT_FRONT_RIGHT_OF_CENTER);

 

 

   As the following comment notes, the channel definitions here must be kept in sync with those in include/media/AudioSystem.h.

   // Channel mask definitions must be kept in sync with native values in include/media/AudioSystem.h
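The composite masks above are plain bitwise ORs of per-speaker bits, which also means the channel count can be read back with a popcount. A quick plain-Java sanity check (constants copied from the listing above; the ChannelMaskDemo class itself is just scratch code for illustration):

```java
public class ChannelMaskDemo {
    // Constants copied from the AudioFormat listing above.
    static final int CHANNEL_OUT_FRONT_LEFT  = 0x4;
    static final int CHANNEL_OUT_FRONT_RIGHT = 0x8;
    static final int CHANNEL_OUT_BACK_LEFT   = 0x40;
    static final int CHANNEL_OUT_BACK_RIGHT  = 0x80;
    static final int CHANNEL_OUT_MONO   = CHANNEL_OUT_FRONT_LEFT;
    static final int CHANNEL_OUT_STEREO = CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT;
    static final int CHANNEL_OUT_QUAD   = CHANNEL_OUT_STEREO
            | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT;

    public static void main(String[] args) {
        // Each set bit is one speaker position, so popcount == channel count.
        System.out.println(Integer.bitCount(CHANNEL_OUT_MONO));   // 1
        System.out.println(Integer.bitCount(CHANNEL_OUT_STEREO)); // 2
        System.out.println(Integer.bitCount(CHANNEL_OUT_QUAD));   // 4
    }
}
```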

 

 

4. TEST_FORMAT is the third parameter of AudioTrack.getMinBufferSize.

   The comment on this parameter reads:

   the format in which the audio data is represented.

     *   See {@link AudioFormat#ENCODING_PCM_16BIT} and

     *   {@link AudioFormat#ENCODING_PCM_8BIT}

   It is assigned AudioFormat.ENCODING_PCM_16BIT, again defined in the AudioFormat class.

   The available types are:

    /** Audio data format: PCM 16 bit per sample */

    public static final int ENCODING_PCM_16BIT = 2; // accessed by native code

    /** Audio data format: PCM 8 bit per sample */

    public static final int ENCODING_PCM_8BIT = 3;  // accessed by native code
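Sample size and channel count together determine the frame size, a quantity that matters below when the constructor validates the buffer. A small plain-Java sketch; the helper frameSizeInBytes is my own, mirroring the expression used inside AudioTrack's buffer size check (quoted later in this article):

```java
public class FrameSizeDemo {
    // Encoding constants copied from AudioFormat.
    static final int ENCODING_PCM_16BIT = 2;
    static final int ENCODING_PCM_8BIT  = 3;

    // One frame = one sample for every channel. Helper name is mine,
    // for illustration only.
    static int frameSizeInBytes(int channelCount, int audioFormat) {
        int bytesPerSample = (audioFormat == ENCODING_PCM_8BIT) ? 1 : 2;
        return channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        System.out.println(frameSizeInBytes(1, ENCODING_PCM_16BIT)); // mono 16-bit:   2
        System.out.println(frameSizeInBytes(2, ENCODING_PCM_16BIT)); // stereo 16-bit: 4
        System.out.println(frameSizeInBytes(2, ENCODING_PCM_8BIT));  // stereo 8-bit:  2
    }
}
```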

 

 

5. TEST_MODE is the sixth parameter of the AudioTrack constructor.

   The comment on this parameter reads:

   streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}

   It is assigned AudioTrack.MODE_STREAM, a constant defined in the AudioTrack class.

   There are two available modes:

    /**

     * Creation mode where audio data is transferred from Java to the native layer

     * only once before the audio starts playing.

     */

    public static final int MODE_STATIC = 0;

    /**

     * Creation mode where audio data is streamed from Java to the native layer

     * as the audio is playing.

     */

    public static final int MODE_STREAM = 1;

 

 

   Looking at the class comment of AudioTrack, most of it is devoted to the difference between MODE_STATIC and MODE_STREAM.

   The comment reads:

/**

 * The AudioTrack class manages and plays a single audio resource for Java applications.

 * It allows to stream PCM audio buffers to the audio hardware for playback. This is

 * achieved by "pushing" the data to the AudioTrack object using one of the

 *  {@link #write(byte[], int, int)} and {@link #write(short[], int, int)} methods.

 * 

 * <p>An AudioTrack instance can operate under two modes: static or streaming.<br>

 * In Streaming mode, the application writes a continuous stream of data to the AudioTrack, using

 * one of the write() methods. These are blocking and return when the data has been transferred

 * from the Java layer to the native layer and queued for playback. The streaming mode

 *  is most useful when playing blocks of audio data that for instance are:

 * <ul>

 *   <li>too big to fit in memory because of the duration of the sound to play,</li>

 *   <li>too big to fit in memory because of the characteristics of the audio data

 *         (high sampling rate, bits per sample …)</li>

 *   <li>received or generated while previously queued audio is playing.</li>

 * </ul>

 * The static mode is to be chosen when dealing with short sounds that fit in memory and

 * that need to be played with the smallest latency possible. AudioTrack instances in static mode

 * can play the sound without the need to transfer the audio data from Java to native layer

 * each time the sound is to be played. The static mode will therefore be preferred for UI and

 * game sounds that are played often, and with the smallest overhead possible.

 *

 * <p>Upon creation, an AudioTrack object initializes its associated audio buffer.

 * The size of this buffer, specified during the construction, determines how long an AudioTrack

 * can play before running out of data.<br>

 * For an AudioTrack using the static mode, this size is the maximum size of the sound that can

 * be played from it.<br>

 * For the streaming mode, data will be written to the hardware in chunks of

 * sizes inferior to the total buffer size.

 */

   The gist:

     MODE_STREAM streams: as playback proceeds, data keeps flowing from the Java layer down to the native layer.

       This mode suits audio that is large and has no tight latency requirement.

     MODE_STATIC transfers all the data from the Java layer to the native layer once, up front.

       This mode suits audio whose data volume is small (it must live entirely in memory, so memory consumption matters) and which must play with low latency.

   For the details, read the English comment above carefully.

 

 

6. TEST_STREAM_TYPE is the first parameter of the AudioTrack constructor.

   The comment on this parameter reads:

    the type of the audio stream. See

     *   {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},

     *   {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC} and

     *   {@link AudioManager#STREAM_ALARM}

   It is assigned AudioManager.STREAM_MUSIC, a constant defined in the AudioManager class.

   There are ten available stream types:

    /** The audio stream for phone calls */

    public static final int STREAM_VOICE_CALL = AudioSystem.STREAM_VOICE_CALL;

    /** The audio stream for system sounds */

    public static final int STREAM_SYSTEM = AudioSystem.STREAM_SYSTEM;

    /** The audio stream for the phone ring */

    public static final int STREAM_RING = AudioSystem.STREAM_RING;

    /** The audio stream for music playback */

    public static final int STREAM_MUSIC = AudioSystem.STREAM_MUSIC;

    /** The audio stream for alarms */

    public static final int STREAM_ALARM = AudioSystem.STREAM_ALARM;

    /** The audio stream for notifications */

    public static final int STREAM_NOTIFICATION = AudioSystem.STREAM_NOTIFICATION;

    /** @hide The audio stream for phone calls when connected to bluetooth */

    public static final int STREAM_BLUETOOTH_SCO = AudioSystem.STREAM_BLUETOOTH_SCO;

    /** @hide The audio stream for enforced system sounds in certain countries (e.g camera in Japan) */

    public static final int STREAM_SYSTEM_ENFORCED = AudioSystem.STREAM_SYSTEM_ENFORCED;

    /** The audio stream for DTMF Tones */

    public static final int STREAM_DTMF = AudioSystem.STREAM_DTMF;

    /** @hide The audio stream for text to speech (TTS) */

    public static final int STREAM_TTS = AudioSystem.STREAM_TTS;

 

 

   The class comment of AudioManager reads:

   AudioManager provides access to volume and ringer mode control.

 

 

   All of these values come from the AudioSystem class, whose relevant definitions are:

    /* FIXME: Need to finalize this and correlate with native layer */

    /*

     * If these are modified, please also update Settings.System.VOLUME_SETTINGS

     * and attrs.xml

     */

    /* The audio stream for phone calls */

    public static final int STREAM_VOICE_CALL = 0;

    /* The audio stream for system sounds */

    public static final int STREAM_SYSTEM = 1;

    /* The audio stream for the phone ring and message alerts */

    public static final int STREAM_RING = 2;

    /* The audio stream for music playback */

    public static final int STREAM_MUSIC = 3;

    /* The audio stream for alarms */

    public static final int STREAM_ALARM = 4;

    /* The audio stream for notifications */

    public static final int STREAM_NOTIFICATION = 5;

    /* @hide The audio stream for phone calls when connected on bluetooth */

    public static final int STREAM_BLUETOOTH_SCO = 6;

    /* @hide The audio stream for enforced system sounds in certain countries (e.g camera in Japan) */

    public static final int STREAM_SYSTEM_ENFORCED = 7;

    /* @hide The audio stream for DTMF tones */

    public static final int STREAM_DTMF = 8;

    /* @hide The audio stream for text to speech (TTS) */

    public static final int STREAM_TTS = 9;

   As the comments note, these definitions must be kept correctly correlated with the native layer.

   And if they are ever modified, Settings.System.VOLUME_SETTINGS and attrs.xml must be updated as well.

 

 

7. Next comes this code:

     int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);

   As the name suggests, this obtains the minimum buffer size. In effect the track is saying: if you want me to work properly, you must give me at least this much buffer.

   That requirement is derived from the sample rate, the channel count, and the sample size (8-bit or 16-bit).
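To get a feel for the quantities involved: for this test's parameters (22050 Hz, mono, 16-bit PCM) the raw playback data rate works out as below. This is back-of-the-envelope arithmetic only, not how getMinBufferSize computes its result; the real implementation also queries the hardware.

```java
public class DataRateDemo {
    public static void main(String[] args) {
        int sampleRate     = 22050; // TEST_SR
        int channelCount   = 1;     // CHANNEL_OUT_MONO
        int bytesPerSample = 2;     // ENCODING_PCM_16BIT

        // Bytes consumed per second of playback.
        int bytesPerSecond = sampleRate * channelCount * bytesPerSample;
        System.out.println(bytesPerSecond); // 44100

        // How many milliseconds of audio a hypothetical 4096-byte buffer holds.
        System.out.println(1000L * 4096 / bytesPerSecond); // 92
    }
}
```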

 

 

8. Next, an AudioTrack object is created:

      AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,

                2*minBuffSize, TEST_MODE);

   Among the parameters, TEST_SR, TEST_CONF and TEST_FORMAT are the same ones passed to AudioTrack.getMinBufferSize.

   TEST_STREAM_TYPE is the stream type.

   minBuffSize is the minimum buffer size requested above. But why is it multiplied by 2 here???

   The comment in the AudioTrack constructor says:

     * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read

     *   from for playback. If using the AudioTrack in streaming mode, you can write data into

     *   this buffer in smaller chunks than this size. If using the AudioTrack in static mode,

     *   this is the maximum size of the sound that will be played for this instance.

     *   See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size

     *   for the successful creation of an AudioTrack instance in streaming mode. Using values

     *   smaller than getMinBufferSize() will result in an initialization failure.

   Still no answer.

   Next, the comment of getMinBufferSize:

     * @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,

     *   or {@link #ERROR} if the implementation was unable to query the hardware for its output

     *     properties,

     *   or the minimum buffer size expressed in bytes.

   getMinBufferSize returns a value in bytes, and the corresponding AudioTrack constructor parameter is also in bytes. What is more, the very next statement:

      byte data[] = new byte[minBuffSize];

   creates a buffer that is also just minBuffSize bytes.

   So why multiply by 2???

   The AudioTrack constructor performs a buffer size check:

     audioBuffSizeCheck(bufferSizeInBytes);

 

 

   The comment on audioBuffSizeCheck reads:

    // Convenience method for the contructor's audio buffer size check.

    // preconditions:

    //    mChannelCount is valid

    //    mAudioFormat is valid

    // postcondition:

    //    mNativeBufferSizeInBytes is valid (multiple of frame size, positive)

   The buffer size must be positive and a whole-number multiple of the frame size.

   So what exactly is a frame? Look at the implementation of audioBuffSizeCheck:

 private void audioBuffSizeCheck(int audioBufferSize) {

        // NB: this section is only valid with PCM data.

        //     To update when supporting compressed formats

        int frameSizeInBytes = mChannelCount

                * (mAudioFormat == AudioFormat.ENCODING_PCM_8BIT ? 1 : 2);

        if ((audioBufferSize % frameSizeInBytes != 0) || (audioBufferSize < 1)) {

            throw (new IllegalArgumentException("Invalid audio buffer size."));

        }

 

 

        mNativeBufferSizeInBytes = audioBufferSize;

    }

   Our channel is AudioFormat.CHANNEL_OUT_MONO, so mChannelCount is 1, and mAudioFormat is ENCODING_PCM_16BIT (value 2), giving frameSizeInBytes == 2.

   If audioBufferSize is not a multiple of 2 (frameSizeInBytes), an exception is thrown!!!

   Ah, so that's what's going on!!!
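Whatever the true motivation for the factor of 2, it does at least guarantee the check passes: for mono 16-bit PCM the frame size is 2 bytes, and 2*minBuffSize is even by construction. A plain-Java restatement of the predicate from audioBuffSizeCheck above (the helper name is mine):

```java
public class BuffSizeCheckDemo {
    // Same predicate audioBuffSizeCheck() enforces (true = accepted).
    static boolean isValidBufferSize(int audioBufferSize, int frameSizeInBytes) {
        return (audioBufferSize % frameSizeInBytes == 0) && (audioBufferSize >= 1);
    }

    public static void main(String[] args) {
        int frameSize = 2; // 1 channel (mono) * 2 bytes (16-bit PCM)

        // An odd buffer size would be rejected with IllegalArgumentException...
        System.out.println(isValidBufferSize(4411, frameSize));     // false
        // ...but doubling any positive size always yields a frame multiple.
        System.out.println(isValidBufferSize(2 * 4411, frameSize)); // true
    }
}
```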

 

 

9. Creating the buffer:

     byte data[] = new byte[minBuffSize];

   As we can see, the buffer in the Java layer is still just minBuffSize bytes.

   The doubled value is what gets passed down to the native layer:

        // native initialization

        int initResult = native_setup(new WeakReference<AudioTrack>(this),

                mStreamType, mSampleRate, mChannels, mAudioFormat,

                mNativeBufferSizeInBytes, mDataLoadMode, session);

   In other words, it is the native-side buffer whose size must be a whole-number multiple of the frame size.

 

 

10. Next comes the state check:

      assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);

    Media operations on Android involve a notion of state.

    From a given state you may only transition to certain other states; that is, an operation is valid only in particular states and otherwise causes an error.

    The comment on getState:

    /**

     * Returns the state of the AudioTrack instance. This is useful after the

     * AudioTrack instance has been created to check if it was initialized

     * properly. This ensures that the appropriate hardware resources have been

     * acquired.

     * @see #STATE_INITIALIZED

     * @see #STATE_NO_STATIC_DATA

     * @see #STATE_UNINITIALIZED

     */

 

 

11. Now the data gets written:

    assertTrue(TEST_NAME,

                track.write(data, 0, data.length) == data.length);

    The write function will be savored in detail later, so I will not dwell on it here.

    Its comment is below, to give a rough idea first:

    /**

     * Writes the audio data to the audio hardware for playback.

     * @param audioData the array that holds the data to play.

     * @param offsetInBytes the offset expressed in bytes in audioData where the data to play

     *    starts.

     * @param sizeInBytes the number of bytes to read in audioData after the offset.

     * @return the number of bytes that were written or {@link #ERROR_INVALID_OPERATION}

     *    if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if

     *    the parameters don't resolve to valid data and indexes.

     */

 

 

12. The final step:

      track.release();

 

 

    Its implementation:

    /**

     * Releases the native AudioTrack resources.

     */

    public void release() {

        // even though native_release() stops the native AudioTrack, we need to stop

        // AudioTrack subclasses too.

        try {

            stop();

        } catch(IllegalStateException ise) {

            // don't raise an exception, we're releasing the resources.

        }

        native_release();

        mState = STATE_UNINITIALIZED;

    }

    It first calls its own stop function, then calls down into native_release in the native layer.

 

 

    The implementation of stop:

    /**

     * Stops playing the audio data.

     * @throws IllegalStateException

     */

    public void stop()

    throws IllegalStateException {

        if (mState != STATE_INITIALIZED) {

            throw(new IllegalStateException("stop() called on uninitialized AudioTrack."));

        }

 

 

        // stop playing

        synchronized(mPlayStateLock) {

            native_stop();

            mPlayState = PLAYSTATE_STOPPED;

        }

    }

    It first checks the state, then calls down into native_stop in the native layer.

 

 

    How does the Java layer call into the native layer? Through the JNI mechanism.

    I will not introduce JNI here.

 

 

    The two native functions mentioned above are both registered in the file frameworks\base\core\jni\android_media_AudioTrack.cpp,

    via the following table:

static JNINativeMethod gMethods[] = {

    // name,              signature,     funcPtr

    {"native_start",         "()V",      (void *)android_media_AudioTrack_start},

    {"native_stop",          "()V",      (void *)android_media_AudioTrack_stop},

    {"native_pause",         "()V",      (void *)android_media_AudioTrack_pause},

    {"native_flush",         "()V",      (void *)android_media_AudioTrack_flush},

    {"native_setup",         "(Ljava/lang/Object;IIIIII[I)I",

                                         (void *)android_media_AudioTrack_native_setup},

    {"native_finalize",      "()V",      (void *)android_media_AudioTrack_native_finalize},

    {"native_release",       "()V",      (void *)android_media_AudioTrack_native_release},

    {"native_write_byte",    "([BIII)I", (void *)android_media_AudioTrack_native_write},

    {"native_write_short",   "([SIII)I", (void *)android_media_AudioTrack_native_write_short},

    {"native_setVolume",     "(FF)V",    (void *)android_media_AudioTrack_set_volume},

    {"native_get_native_frame_count",

                             "()I",      (void *)android_media_AudioTrack_get_native_frame_count},

    {"native_set_playback_rate",

                             "(I)I",     (void *)android_media_AudioTrack_set_playback_rate},

    {"native_get_playback_rate",

                             "()I",      (void *)android_media_AudioTrack_get_playback_rate},

    {"native_set_marker_pos","(I)I",     (void *)android_media_AudioTrack_set_marker_pos},

    {"native_get_marker_pos","()I",      (void *)android_media_AudioTrack_get_marker_pos},

    {"native_set_pos_update_period",

                             "(I)I",     (void *)android_media_AudioTrack_set_pos_update_period},

    {"native_get_pos_update_period",

                             "()I",      (void *)android_media_AudioTrack_get_pos_update_period},

    {"native_set_position",  "(I)I",     (void *)android_media_AudioTrack_set_position},

    {"native_get_position",  "()I",      (void *)android_media_AudioTrack_get_position},

    {"native_set_loop",      "(III)I",   (void *)android_media_AudioTrack_set_loop},

    {"native_reload_static", "()I",      (void *)android_media_AudioTrack_reload},

    {"native_get_output_sample_rate",

                             "(I)I",      (void *)android_media_AudioTrack_get_output_sample_rate},

    {"native_get_min_buff_size",

                             "(III)I",   (void *)android_media_AudioTrack_get_min_buff_size},

    {"native_setAuxEffectSendLevel",

                             "(F)V",     (void *)android_media_AudioTrack_setAuxEffectSendLevel},

    {"native_attachAuxEffect",

                             "(I)I",     (void *)android_media_AudioTrack_attachAuxEffect},

};

 

 

    native_stop corresponds to the function android_media_AudioTrack_stop:

static void

android_media_AudioTrack_stop(JNIEnv *env, jobject thiz)

{

    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(

        thiz, javaAudioTrackFields.nativeTrackInJavaObj);

    if (lpTrack == NULL ) {

        jniThrowException(env, "java/lang/IllegalStateException",

            "Unable to retrieve AudioTrack pointer for stop()");

        return;

    }

 

 

    lpTrack->stop();

}

 

 

    native_release corresponds to the function android_media_AudioTrack_native_release:

static void android_media_AudioTrack_native_release(JNIEnv *env,  jobject thiz) {

      

    // do everything a call to finalize would

    android_media_AudioTrack_native_finalize(env, thiz);

    // + reset the native resources in the Java object so any attempt to access

    // them after a call to release fails.

    env->SetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj, 0);

    env->SetIntField(thiz, javaAudioTrackFields.jniData, 0);

}

 

 

    The implementation of android_media_AudioTrack_native_finalize:

static void android_media_AudioTrack_native_finalize(JNIEnv *env,  jobject thiz) {

    //LOGV("android_media_AudioTrack_native_finalize jobject: %x\n", (int)thiz);

      

    // delete the AudioTrack object

    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(

        thiz, javaAudioTrackFields.nativeTrackInJavaObj);

    if (lpTrack) {

        //LOGV("deleting lpTrack: %x\n", (int)lpTrack);

        lpTrack->stop();

        delete lpTrack;

    }

   

    // delete the JNI data

    AudioTrackJniStorage* pJniStorage = (AudioTrackJniStorage *)env->GetIntField(

        thiz, javaAudioTrackFields.jniData);

    if (pJniStorage) {

        // delete global refs created in native_setup

        env->DeleteGlobalRef(pJniStorage->mCallbackData.audioTrack_class);

        env->DeleteGlobalRef(pJniStorage->mCallbackData.audioTrack_ref);

        //LOGV("deleting pJniStorage: %x\n", (int)pJniStorage);

        delete pJniStorage;

    }

}

 

 

    Both android_media_AudioTrack_stop and android_media_AudioTrack_native_finalize call env->GetIntField:

    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(

        thiz, javaAudioTrackFields.nativeTrackInJavaObj);

    This evidently retrieves the native Track object stored on the Java side.

    Since there is a Get here, somewhere there must be a Set.

    Sure enough, the function android_media_AudioTrack_native_release above does call Set:

      env->SetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj, 0);

    Except that there it merely clears the field to 0.

    So where is the real Set? One word: search!

 

 

    But wait, let us first see what javaAudioTrackFields actually is:

struct fields_t {

    // these fields provide access from C++ to the…

    jclass    audioTrackClass;       //… AudioTrack class

    jmethodID postNativeEventInJava; //… event post callback method

    int       PCM16;                 //…  format constants

    int       PCM8;                  //…  format constants

    int       STREAM_VOICE_CALL;     //…  stream type constants

    int       STREAM_SYSTEM;         //…  stream type constants

    int       STREAM_RING;           //…  stream type constants

    int       STREAM_MUSIC;          //…  stream type constants

    int       STREAM_ALARM;          //…  stream type constants

    int       STREAM_NOTIFICATION;   //…  stream type constants

    int       STREAM_BLUETOOTH_SCO;  //…  stream type constants

    int       STREAM_DTMF;           //…  stream type constants

    int       MODE_STREAM;           //…  memory mode

    int       MODE_STATIC;           //…  memory mode

    jfieldID  nativeTrackInJavaObj;  // stores in Java the native AudioTrack object

    jfieldID  jniData;      // stores in Java additional resources used by the native AudioTrack

};

static fields_t javaAudioTrackFields;

    So it exists to give C++ access to these Java-side items; the "…" in the struct comment is completed field by field. One wonders whether it will ever be extended to other languages.

    Among its members, nativeTrackInJavaObj stores the native AudioTrack object on the Java side.

 

 

    Back to the earlier topic: search!

    It turns out android_media_AudioTrack_native_setup calls env->SetIntField to do the set.

    File path: frameworks\base\core\jni\android_media_AudioTrack.cpp

    That is the same file as the Java-to-native entry points above, which means android_media_AudioTrack_native_setup should also be called from the Java layer.

    Checking the mapping table: sure enough, it corresponds to native_setup.

 

 

    I will not chew through android_media_AudioTrack_native_setup in detail yet. Roughly, it:

      checks the parameters and state;

      creates a native AudioTrack object;

      calls some initialization and setup functions on that AudioTrack object;

      finally stores the AudioTrack object into the Java layer via env->SetIntField.

    An AudioTrackJniStorage object receives similar treatment.

 

 

To sum up the usage pattern:

1. First, obtain the minimum required buffer size from the sample rate, the sample size, and the channel count.

2. Create an AudioTrack from the stream type, the mode (stream or static), the minimum buffer size from step 1 (multiplied by 2 here so that the native-side buffer size is a whole-number multiple of the frame size), plus the sample rate, sample size, and channel count. This AudioTrack is the Java class; its constructor eventually calls into native code, creates a native AudioTrack object, and stores it into the Java AudioTrack object via env->SetIntField.

3. Call the AudioTrack object's write function. The call goes to the Java AudioTrack object directly; write should in turn reach the native AudioTrack object. Believe it or not, I do.

4. Call release to stop playback and free the resources.

Source: 江風的專欄
