
ubuntu - Running apps while training models

So I'm new to deep learning (I have not yet built a machine for it) and would like to know: is it possible to run other apps in Ubuntu while training a model? In particular, can I play a game in DOSBox while I'm waiting for the training to finish? It looks like there's a way to reserve one CPU core and part of the GPU via TensorFlow, but there doesn't seem to be any such functionality for RAM. Is there any way around that?



1 Reply


Welcome to StackOverflow! You don't need to build a dedicated machine to experiment with deep learning. There are plenty of options in the cloud, and the CPU can get you pretty far with the toy datasets that most tutorial websites and introductory courses use.
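For scale, here is a minimal sketch, assuming TensorFlow 2.x with Keras, of the kind of toy-dataset run that trains in a couple of minutes on an ordinary laptop CPU:

    import tensorflow as tf

    # MNIST is tiny (~11 MB); loading and training on it barely touches RAM.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / 255.0  # scale pixel values to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=32)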

In particular, can I play a game in Dosbox while I'm waiting for the training to finish?

There is no way I could answer this without more details. It would depend on several factors (a quick way to check the first few on your own machine is sketched after this list):

  • Your mini-batch size during training.
  • How much RAM your system has.
  • Whether you are training on the CPU or the GPU.
  • Your hard drive configuration (is your dataset on a separate drive from the OS?).
  • Your storage transfer rates and interface (SSD vs. SSHD vs. HDD vs. NAS).
  • The number of PCIe lanes on your motherboard.
  • The compatibility of your CPU with said PCIe lanes.
  • The cooling capacity of your system (both CPU and GPU) and any throttling settings you have enabled.
  • And probably about a dozen other variables I could name off the top of my head.
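To gauge the first few of those factors on your own box, a rough sketch follows; it assumes the psutil package is installed (pip install psutil) and that you have an NVIDIA card with nvidia-smi on the PATH:

    import subprocess
    import psutil  # assumption: installed separately via pip

    # Host RAM headroom: what's left for a game once training has its share.
    vm = psutil.virtual_memory()
    print(f"RAM free: {vm.available / 2**30:.1f} of {vm.total / 2**30:.1f} GiB")

    # GPU memory headroom (NVIDIA only), via nvidia-smi's CSV query mode.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        used, total = map(int, line.split(", "))
        print(f"GPU free: {total - used} of {total} MiB")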

It looks like there's a way to reserve one CPU core and part of the GPU via Tensorflow, but there doesn't seem to be any such functionality for RAM. Is there any way around that?

There are lots of ways around this. The first that comes to mind is simply to decrease the batch size during training; this will decrease your RAM usage, and the OS will be free to re-allocate that memory for your gaming needs. Another option is to decrease the number of asynchronous threads your data loaders use, to similar effect. A sketch of both follows.
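As a concrete sketch, assuming TensorFlow 2.x (the 2048 MiB cap and the thread counts are arbitrary example values to tune for your hardware; run this before building your model):

    import tensorflow as tf

    # Cap TensorFlow to a fixed slice of GPU memory; the desktop and any
    # game keep the remainder.
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=2048)],
        )

    # Leave CPU cores free by limiting TensorFlow's thread pools.
    tf.config.threading.set_intra_op_parallelism_threads(2)
    tf.config.threading.set_inter_op_parallelism_threads(1)

    # Host-RAM pressure drops with smaller batches and fewer parallel loader
    # calls. `parse_fn`, `dataset`, and `model` below are placeholders for
    # your own pipeline:
    #   dataset = dataset.map(parse_fn, num_parallel_calls=2).batch(16)
    #   model.fit(dataset, epochs=10)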

Do note that if your dataset is large enough that it must be streamed from disk rather than cached, you won't be using much RAM during training; most of your computational time will be spent performing backprop on the GPU. There will be some RAM usage as training samples are DMA-fed to the GPU, but your gaming is more likely to suffer because training ties up so much of your GPU's memory and compute capacity.
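To make that RAM point concrete, here is a sketch of a tf.data pipeline with a bounded prefetch buffer; a real large dataset would stream from disk (e.g., via TFRecordDataset) rather than from the in-memory stand-in tensor used below:

    import tensorflow as tf

    # Stand-in data; a real large dataset would come from TFRecord files or
    # similar so it streams from disk instead of living entirely in RAM.
    dataset = (
        tf.data.Dataset.from_tensor_slices(tf.random.uniform([10_000, 64]))
        .batch(32)
        .prefetch(2)  # at most ~2 prepared batches buffered in host memory
    )

    for batch in dataset.take(3):
        print(batch.shape)  # (32, 64)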
