I am currently seeing a strange issue where I have a Pod that is constantly being Evicted by Kubernetes.
My Cluster / App Information:
- Node size: 7.5GB RAM / 2vCPU
- Application Language: nodejs
- Use Case: puppeteer website extraction (I have code that loads a website, then extracts an element and repeats this a couple of times per hour)
- Running on: Azure Kubernetes Service (AKS)
What I tried:
- Checked the events, which show the node hitting its memory eviction threshold and the pod then failing to reschedule:
  8m17s   Normal   NodeHasSufficientMemory   node/node-1         Node node-1 status is now: NodeHasSufficientMemory
  2m28s   Warning  EvictionThresholdMet      node/node-1         Attempting to reclaim memory
  71m     Warning  FailedScheduling          pod/my-deployment   0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/memory-pressure: }, that the pod didn't tolerate, 3 node(s) didn't match node selector
- Checked kubectl top pods, where it shows the pod was only utilizing ~30% of the node's memory
- Added resource limits in my Kubernetes .yaml (a note on requests vs. limits follows the manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-d
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: main
        image: my-image
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"
Current way of thinking:
A node has X memory in total, but of that X only Y is actually allocatable because of reserved space. However, when running os.totalmem() in Node.js, I can see that Node is still allowed to allocate up to the full X.
What I am thinking is that Node.js keeps allocating towards X because its garbage collection only kicks in near X, whereas it should kick in at Y instead. With my limit in place, though, I actually expected the process to see the limit rather than the Kubernetes node's total memory.
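
To illustrate that point, here is a minimal Node.js sketch (the file name and printed labels are my own, not from the question) comparing what the OS reports, what the container's cgroup actually allows, and what ceiling V8 has picked for its heap:

// memory-check.js (hypothetical helper): compare what Node.js "sees"
// with what the container is actually allowed to use.
const os = require('os');
const fs = require('fs');
const v8 = require('v8');

const GiB = 1024 ** 3;

// os.totalmem() reports the node's physical memory, not the container limit.
console.log('os.totalmem():', (os.totalmem() / GiB).toFixed(2), 'GiB');

// The cgroup limit is what Kubernetes actually enforces. Which file exists
// depends on whether the node uses cgroup v1 or v2.
const cgroupPaths = [
  '/sys/fs/cgroup/memory/memory.limit_in_bytes', // cgroup v1
  '/sys/fs/cgroup/memory.max',                   // cgroup v2
];
for (const p of cgroupPaths) {
  try {
    const raw = fs.readFileSync(p, 'utf8').trim();
    const pretty = raw === 'max' ? 'unlimited' : (Number(raw) / GiB).toFixed(2) + ' GiB';
    console.log('cgroup limit (' + p + '):', pretty);
  } catch (err) {
    // File not present for this cgroup version; skip it.
  }
}

// V8's heap ceiling. Depending on the Node.js version this may be derived from
// total system memory rather than the cgroup limit, unless capped explicitly.
console.log('V8 heap_size_limit:', (v8.getHeapStatistics().heap_size_limit / GiB).toFixed(2), 'GiB');

If heap_size_limit comes out higher than the container limit, starting the process with something like node --max-old-space-size=1536 memory-check.js (the 1536 MB value is an assumption) caps V8's heap below the Kubernetes limit; it does not cover memory used outside the V8 heap, such as the Chromium processes Puppeteer launches.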
Question:
Are there any other things I should try to resolve this? Has anyone run into this before?