Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
259 views
in Technique by (71.8m points)

In Kubernetes, are resource quotas a good way to throttle how much CPU and memory is allowed for running jobs at a given time?

Suppose I have an API that allows users to create jobs (V1Jobs) in Kubernetes. I would like users to be able to submit as many jobs as they want without getting a failed response from the API, and for Kubernetes to queue/throttle the jobs until there are enough available resources in the given namespace. For example, suppose I create a resource quota with a limit of 1 CPU and 1Gi of memory, and a user then submits 100 jobs that each request 1 CPU/1Gi. I'd like Kubernetes to process them one at a time until they are all complete; in other words, to run the 100 jobs sequentially. Is creating the resource quota and letting the job-controller/scheduler handle the throttling the right way to go, or would there be benefits to tracking cluster usage externally (in an application) and only creating the V1Jobs through the API once there is capacity in the namespace?
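For reference, a minimal sketch of the quota described above (the namespace and object names here are placeholders, not from the question):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: job-throttle     # hypothetical name
  namespace: team-a      # hypothetical namespace
spec:
  hard:
    # Cap both requested and limited CPU/memory across all Pods
    # in the namespace to 1 CPU / 1Gi.
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "1"
    limits.memory: 1Gi
```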

question from: https://stackoverflow.com/questions/65927390/in-kubernetes-are-resource-quotas-a-good-way-to-throttle-how-much-cpu-and-memor


1 Reply

0 votes
by (71.8m points)

ResourceQuotas are a good start, limiting the amount of resources that may be used within a namespace, or by objects matching a scope selector.

It would indeed prevent the scheduler from creating Pods that would exceed your quota limitations, while the API would still accept new Job objects posted by clients. If you have N Jobs each requesting 1 CPU/1Gi RAM, while your quota allows for less than 2 CPU/2Gi RAM, you should see those Jobs run sequentially.
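A minimal Job sketch matching that scenario (namespace, names, and image are placeholders). Because the Pod template's requests/limits equal the whole quota, only one such Pod can be admitted at a time; the job controller keeps retrying Pod creation for the others until capacity frees up:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: work-    # hypothetical; use `kubectl create -f` with generateName
  namespace: team-a      # hypothetical namespace carrying the ResourceQuota
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox               # placeholder workload
        command: ["sh", "-c", "sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 1Gi
```

Note that quota enforcement happens at Pod admission, so submitting 100 of these Jobs succeeds at the API level; only the Pod creations are deferred.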

It could still make sense to track how many pending/running Jobs you have in your namespace, as this could show that there are currently too many Jobs for your current quota configuration. The kube-state-metrics exporter for Prometheus gathers the metrics you would need for this, and you'd find sample Grafana dashboards and alerting rules built around them.
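As a sketch of such an alert, assuming kube-state-metrics is scraped (the namespace, threshold, and rule names below are made up), a plain Prometheus rule file watching the Pending-Pod backlog could look like:

```yaml
groups:
- name: job-backlog          # hypothetical rule group
  rules:
  - alert: JobBacklogTooDeep
    # kube_pod_status_phase is exported by kube-state-metrics;
    # Pending Pods in the namespace indicate Jobs waiting on quota.
    expr: sum(kube_pod_status_phase{namespace="team-a", phase="Pending"}) > 10
    for: 15m
    labels:
      severity: warning
    annotations:
      summary: "More than 10 Pods pending in team-a for 15m; quota may be too tight"
```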

If there's a risk that some containers would start without proper CPU or memory resource requests/limits, you could also look into LimitRanges to enforce defaults.
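This matters because once a quota covers CPU/memory, Pods that omit requests/limits are rejected outright; a LimitRange fills them in. A minimal sketch (names and values are arbitrary placeholders):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults   # hypothetical name
  namespace: team-a          # hypothetical namespace
spec:
  limits:
  - type: Container
    # Applied to any container that doesn't set its own limits...
    default:
      cpu: 500m
      memory: 512Mi
    # ...and to any container that doesn't set its own requests.
    defaultRequest:
      cpu: 250m
      memory: 256Mi
```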

