I am currently developing an Android application that records and plays back audio. I am new to dealing with audio and I'm having some trouble with encodings and formats.
I am able to record and play the audio inside my application, but I am not able to play the exported file anywhere else. The only way I have found is to export the raw .pcm file and convert it with Audacity.
This is my code to record the audio:
private Thread recordingThread;
private AudioRecord mRecorder;
private boolean isRecording = false;

private void startRecording() {
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            Constants.RECORDER_SAMPLERATE, Constants.RECORDER_CHANNELS,
            Constants.RECORDER_AUDIO_ENCODING,
            Constants.BufferElements2Rec * Constants.BytesPerElement);
    mRecorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioDataToFile();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
private void writeAudioDataToFile() {
    // 16-bit sample buffer read from the AudioRecord (declared here for completeness)
    short sData[] = new short[Constants.BufferElements2Rec];

    // Write the output audio in bytes
    FileOutputStream os = null;
    try {
        os = new FileOutputStream(mFileName);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }

    while (isRecording) {
        // gets the voice output from the microphone as 16-bit samples
        mRecorder.read(sData, 0, Constants.BufferElements2Rec);
        try {
            // writes the buffered data to the file
            byte bData[] = short2byte(sData);
            os.write(bData, 0, Constants.BufferElements2Rec * Constants.BytesPerElement);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    try {
        os.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
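For reference, short2byte() is just a plain conversion of each 16-bit sample into two bytes, low byte first (little-endian); something along these lines:

private byte[] short2byte(short[] sData) {
    // pack each 16-bit sample as two bytes, low byte first (little-endian PCM)
    byte[] bytes = new byte[sData.length * 2];
    for (int i = 0; i < sData.length; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);
        bytes[i * 2 + 1] = (byte) ((sData[i] >> 8) & 0xFF);
    }
    return bytes;
}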
To play the recorded audio, the code is:
private void startPlaying() {
    new Thread(new Runnable() {
        public void run() {
            try {
                File file = new File(mFileName);
                byte[] audioData = null;
                InputStream inputStream = new FileInputStream(mFileName);
                audioData = new byte[Constants.BufferElements2Rec];

                mPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, Constants.RECORDER_SAMPLERATE,
                        AudioFormat.CHANNEL_OUT_MONO, Constants.RECORDER_AUDIO_ENCODING,
                        Constants.BufferElements2Rec * Constants.BytesPerElement, AudioTrack.MODE_STREAM);

                final float duration = (float) file.length() / Constants.RECORDER_SAMPLERATE / 2;
                Log.i(TAG, "PLAYBACK AUDIO");
                Log.i(TAG, String.valueOf(duration));

                mPlayer.setPositionNotificationPeriod(Constants.RECORDER_SAMPLERATE / 10);
                mPlayer.setNotificationMarkerPosition(Math.round(duration * Constants.RECORDER_SAMPLERATE));
                mPlayer.play();

                int i = 0;
                while ((i = inputStream.read(audioData)) != -1) {
                    try {
                        mPlayer.write(audioData, 0, i);
                    } catch (Exception e) {
                        Log.e(TAG, "Exception: " + e.getLocalizedMessage());
                    }
                }
            } catch (FileNotFoundException fe) {
                Log.e(TAG, "File not found: " + fe.getLocalizedMessage());
            } catch (IOException io) {
                Log.e(TAG, "IO Exception: " + io.getLocalizedMessage());
            }
        }
    }).start();
}
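The stop logic is not shown above; it essentially just clears the isRecording flag (which ends the loop in writeAudioDataToFile()) and releases the recorder, roughly:

private void stopRecording() {
    if (mRecorder != null) {
        isRecording = false;   // ends the while loop in writeAudioDataToFile()
        mRecorder.stop();
        mRecorder.release();
        mRecorder = null;
        recordingThread = null;
    }
}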
The constants defined in a Constants class are:
public class Constants {
    final static public int RECORDER_SAMPLERATE = 44100;
    final static public int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
    final static public int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    final static public int BufferElements2Rec = 1024; // 2048-byte buffer: 1024 elements of 2 bytes each
    final static public int BytesPerElement = 2;       // 2 bytes per sample in 16-bit PCM
}
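As a sanity check (this is not part of the code above), the hard-coded buffer size can be compared against the minimum the platform reports for this configuration:

// minimum buffer size the device requires for 44.1 kHz, mono, 16-bit PCM
int minBufSize = AudioRecord.getMinBufferSize(
        Constants.RECORDER_SAMPLERATE,
        Constants.RECORDER_CHANNELS,
        Constants.RECORDER_AUDIO_ENCODING);

// the buffer passed to the AudioRecord constructor should be at least this large
int recordBufSize = Math.max(minBufSize,
        Constants.BufferElements2Rec * Constants.BytesPerElement);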
If I export the file as it is, I can convert it with Audacity and it plays fine. However, I need to export it in a format that can be played directly, without that manual conversion step.
I've seen answers suggesting LAME and am currently working on that. I've also found an answer that converts the raw file to WAV using:
private File rawToWave(final File rawFile, final String filePath) throws IOException {
    File waveFile = new File(filePath);

    byte[] rawData = new byte[(int) rawFile.length()];
    DataInputStream input = null;
    try {
        input = new DataInputStream(new FileInputStream(rawFile));
        input.readFully(rawData);
    } finally {
        if (input != null) {
            input.close();
        }
    }

    DataOutputStream output = null;
    try {
        output = new DataOutputStream(new FileOutputStream(waveFile));
        // WAVE header
        // see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
        writeString(output, "RIFF"); // chunk id
        writeInt(output, 36 + rawData.length); // chunk size
        writeString(output, "WAVE"); // format
        writeString(output, "fmt "); // subchunk 1 id
        writeInt(output, 16); // subchunk 1 size
        writeShort(output, (short) 1); // audio format (1 = PCM)
        writeShort(output, (short) 1); // number of channels
        writeInt(output, Constants.RECORDER_SAMPLERATE); // sample rate
        writeInt(output, Constants.RECORDER_SAMPLERATE * 2); // byte rate
        writeShort(output, (short) 2); // block align
        writeShort(output, (short) 16); // bits per sample
        writeString(output, "data"); // subchunk 2 id
        writeInt(output, rawData.length); // subchunk 2 size

        // Audio data (conversion big endian -> little endian)
        short[] shorts = new short[rawData.length / 2];
        ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
        ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
        for (short s : shorts) {
            bytes.putShort(s);
        }
        output.write(bytes.array());
    } finally {
        if (output != null) {
            output.close();
        }
    }
    return waveFile;
}
private void writeInt(final DataOutputStream output, final int value) throws IOException {
    output.write(value >> 0);
    output.write(value >> 8);
    output.write(value >> 16);
    output.write(value >> 24);
}

private void writeShort(final DataOutputStream output, final short value) throws IOException {
    output.write(value >> 0);
    output.write(value >> 8);
}

private void writeString(final DataOutputStream output, final String value) throws IOException {
    for (int i = 0; i < value.length(); i++) {
        output.write(value.charAt(i));
    }
}
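The conversion is invoked with something like this (the output path here is just an example):

// convert the recorded raw PCM file to WAV; the .wav path is illustrative
File wavFile = rawToWave(new File(mFileName), mFileName + ".wav");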
But the resulting file, when exported, has the correct duration but plays back as nothing except white noise.
Some of the answers that I've tried but couldn't get to work:
Can anyone point out the best solution? Is implementing LAME really necessary, or can this be done in a more straightforward way? And why does the code sample above convert the file to nothing but white noise?