Android - How to add my own Audio codec to AudioRecord?

I currently have a Loop back program for testing Audio on Android devices.

It uses AudioRecord and AudioTrack to record PCM audio from the Mic and play PCM audio out the earpiece.

Here is the code:

public class Record extends Thread {

          static final int bufferSize = 200000;
          final short[] buffer = new short[bufferSize];
          short[] readBuffer = new short[bufferSize];

          public void run() {
            isRecording = true;

            int buffersize = AudioRecord.getMinBufferSize(11025,
                           AudioFormat.CHANNEL_CONFIGURATION_MONO,
                           AudioFormat.ENCODING_PCM_16BIT);

            arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                           11025,
                           AudioFormat.CHANNEL_CONFIGURATION_MONO,
                           AudioFormat.ENCODING_PCM_16BIT,
                           buffersize);

            atrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL,
                           11025,
                           AudioFormat.CHANNEL_CONFIGURATION_MONO,
                           AudioFormat.ENCODING_PCM_16BIT,
                           buffersize,
                           AudioTrack.MODE_STREAM);

            byte[] buffer = new byte[buffersize];
            arec.startRecording();
            atrack.play();

            while(isRecording) {
                    arec.read(buffer, 0, buffersize);
                    atrack.write(buffer, 0, buffer.length);
            }

            arec.stop();
            atrack.stop();
          }
        }
So, as you can see, when the AudioTrack and AudioRecord are created, the encoding is supplied via AudioFormat, but this only allows 16-bit or 8-bit PCM.

I have my own G711 codec implementation now, and I want to encode the audio from the mic and decode it on the way to the earpiece. I have encode(short lin[], int offset, byte enc[], int frames) and decode(byte enc[], short lin[], int frames) methods, but I'm unsure how to use them to encode and decode the audio from the AudioRecord and AudioTrack.
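For readers without their own codec: a G.711 implementation with those exact signatures can be sketched in plain Java. This is illustrative, not the poster's code, and it assumes the μ-law variant of G.711 (A-law uses different segment logic) and 16-bit signed PCM frames:

```java
// Illustrative G.711 mu-law codec in plain Java, using the method
// signatures from the question. A sketch, not the poster's implementation;
// assumes mu-law (A-law differs) and 16-bit signed PCM input.
public class G711 {
    private static final int BIAS = 0x84;   // standard mu-law bias (132)
    private static final int CLIP = 32635;  // clip level before biasing

    static byte linearToUlaw(short pcmShort) {
        int pcm = pcmShort;                 // widen to int to avoid overflow
        int sign = (pcm >> 8) & 0x80;       // remember the sign bit
        if (sign != 0) pcm = -pcm;          // encode the magnitude
        if (pcm > CLIP) pcm = CLIP;
        pcm += BIAS;
        int exponent = 7;                   // find the segment (exponent)
        for (int mask = 0x4000; (pcm & mask) == 0 && exponent > 0; mask >>= 1) {
            exponent--;
        }
        int mantissa = (pcm >> (exponent + 3)) & 0x0F;
        return (byte) ~(sign | (exponent << 4) | mantissa);  // stored inverted
    }

    static short ulawToLinear(byte ulaw) {
        int u = ~ulaw & 0xFF;
        int sign = u & 0x80;
        int exponent = (u >> 4) & 0x07;
        int mantissa = u & 0x0F;
        int magnitude = (((mantissa << 3) + BIAS) << exponent) - BIAS;
        return (short) (sign != 0 ? -magnitude : magnitude);
    }

    // One encoded byte per PCM frame, as in the question's signatures.
    public static void encode(short[] lin, int offset, byte[] enc, int frames) {
        for (int i = 0; i < frames; i++) enc[i] = linearToUlaw(lin[offset + i]);
    }

    public static void decode(byte[] enc, short[] lin, int frames) {
        for (int i = 0; i < frames; i++) lin[i] = ulawToLinear(enc[i]);
    }
}
```

A round trip through encode/decode is lossy (μ-law is logarithmic quantization), so decoded samples only approximate the originals, with larger absolute error at larger amplitudes.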

Can anyone help me or point me in the right direction?


Change your arec.read(buffer, 0, buffersize) call to use the ByteBuffer read() overload of AudioRecord.

Once you have your bytes in the ByteBuffer object, you can insert your G711 implementation's encode call, using the ByteBuffer.asShortBuffer() method to get your captured PCM data into the encoder.
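The ByteBuffer-to-short conversion this describes needs nothing from Android, only java.nio. A minimal sketch (method and class names here are made up; it assumes AudioRecord delivered PCM_16BIT bytes in native byte order):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class PcmBufferDemo {
    // Turn the raw bytes captured by the ByteBuffer overload of
    // AudioRecord.read() into 16-bit PCM samples ready for an
    // encode(short[], ...) call.
    public static short[] toPcm(ByteBuffer captured, int bytesRead) {
        captured.rewind();
        captured.limit(bytesRead);
        // PCM_16BIT samples are assumed to be in native byte order.
        ShortBuffer sb = captured.order(ByteOrder.nativeOrder()).asShortBuffer();
        short[] pcm = new short[sb.remaining()];  // bytesRead / 2 samples
        sb.get(pcm);
        return pcm;
    }
}
```

In the loop, the returned short[] is what you would hand to encode(pcm, 0, enc, pcm.length).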

That would solve your initial question without having to introduce a third party library to do that work for you. (This answer is for future people that come across the question).

My question is why?

In your code above you capture PCM data from the microphone, and write it directly to the buffer for playback.

It doesn't make any sense in your implementation to follow the path of PCM -> G711 (encode) -> G711 (decode) -> PCM. All you are doing is introducing unnecessary processing and latency. Now, if you were going to write encoded data to a file instead of trying to play it through the earpiece, that would be a different story, but your current code doesn't really have a use for encoding the PCM data.

Introducing your own codec here would only make sense in the context of writing the compressed voice data to a file (recording call data for example in a compressed manner) or sending it over the network or something.
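Following that suggestion, persisting the encoded frames is just a stream write, since G.711 produces one byte per 16-bit frame. A sketch (the class and file path are hypothetical, and the byte[] is assumed to already hold encoded data):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class EncodedWriter {
    // Append one block of already-encoded G.711 bytes to a capture file.
    // 'enc' holds one byte per PCM frame; only the first 'frames' are valid.
    public static void appendEncoded(String path, byte[] enc, int frames)
            throws IOException {
        try (FileOutputStream out = new FileOutputStream(path, true)) {
            out.write(enc, 0, frames);
        }
    }
}
```

Called once per capture loop iteration, this yields a raw G.711 file at half the size of the equivalent 16-bit PCM.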

I realize this is a pretty old post. Were you able to get your own G711 implementation working? My initial thought would be to use a natively compiled library and call it via JNI.
