
machine learning - What is a batch in TensorFlow?

The introductory documentation, which I am reading (TOC here), uses the term "batch" (for instance, here) without having defined it.


1 Reply


Let's say you want to do digit recognition (MNIST) and you have defined the architecture of your network (a CNN). Now, you could feed the images from the training data to the network one by one, get the prediction (up to this step it's called doing inference), compute the loss, compute the gradient, update the parameters of your network (i.e. the weights and biases), and then proceed with the next image... This way of training the model is sometimes called online learning.
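As a rough sketch of that one-image-at-a-time (online) loop in TensorFlow 2 — the tiny model, the SGD optimizer, and the use of tf.keras.datasets.mnist here are my own illustrative assumptions, not anything prescribed by the question:

    import tensorflow as tf

    # Illustrative setup: a very small MNIST classifier.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    # "Online" training: one image at a time (a batch dimension of 1).
    for image, label in zip(x_train[:1000], y_train[:1000]):
        image = image[tf.newaxis, ...]          # shape (1, 28, 28)
        label = tf.convert_to_tensor([label])   # shape (1,)
        with tf.GradientTape() as tape:
            logits = model(image)               # inference: shape (1, 10)
            loss = loss_fn(label, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))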

But you want the training to be faster, the gradients to be less noisy, and you also want to take advantage of GPUs, which are efficient at array operations (nD arrays, to be specific). So what you do instead is feed in, say, 100 images at a time (the choice of this size is up to you, i.e. it's a hyperparameter, and it also depends on your problem). For instance, take a look at the picture below (Author: Martin Gorner):

[Image: Batch size of 100]

Here, since you're feeding in 100 images (28x28) at a time (instead of 1 as in the online-training case), the batch size is 100. This is often called the mini-batch size, or simply a mini-batch.
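A minimal sketch of how such a mini-batch of 100 could be assembled with the tf.data API — the pipeline below is my own illustration; only the batch size of 100 comes from the explanation above:

    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0

    # Group the training images into mini-batches of 100.
    dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(100)

    for images, labels in dataset.take(1):
        print(images.shape)  # (100, 28, 28) -- one mini-batch of 100 images
        print(labels.shape)  # (100,)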


Also see the picture below (Author: Martin Gorner):

[Image: batch size again]

Now the matrix multiplications all work out just fine, and you also take advantage of highly optimized array operations, hence achieving faster training times.
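To make the shapes concrete, here is a toy illustration (with made-up random weights, purely for demonstration) of how a single matrix multiplication processes the whole batch at once:

    import tensorflow as tf

    batch = tf.random.uniform((100, 784))   # 100 flattened 28x28 images
    weights = tf.random.normal((784, 10))   # weights of a single dense layer
    biases = tf.zeros((10,))

    # One matmul handles the entire batch in a single array operation.
    logits = tf.matmul(batch, weights) + biases
    print(logits.shape)  # (100, 10): one 10-way prediction per image in the batch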

If you look at the picture above, it doesn't matter much whether you feed in 100, 256, 2048, or 10000 images (the batch size), as long as the batch fits in the memory of your (GPU) hardware. You'll simply get that many predictions.
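For example, a (hypothetical, untrained) model accepts whatever batch size you feed it at inference time and simply returns one prediction per input:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])

    for batch_size in (100, 256, 2048):
        images = tf.random.uniform((batch_size, 28, 28))
        predictions = model(images)
        print(batch_size, predictions.shape)  # (batch_size, 10)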

But please keep in mind that the batch size influences the training time, the error you achieve, the noise in the gradients, etc. There is no general rule of thumb for which batch size works best; just try a few sizes and pick the one that works best for you. Try not to use very large batch sizes, though, since models trained with them tend to generalize worse (overfit). People commonly use mini-batch sizes of 32, 64, 128, 256, 512, 1024, or 2048.
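With Keras, the batch size is just another hyperparameter you pass to fit. A minimal sketch, assuming an arbitrary batch size of 64 and 3 epochs (both values are my own placeholders to try out):

    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # Try a few batch sizes and keep the one that trains fastest / generalizes best.
    model.fit(x_train, y_train, batch_size=64, epochs=3)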


Bonus: To get a sense of how far you can push the batch size, give this paper a read: One weird trick for parallelizing convolutional neural networks.

