
azure - Managing Eviction on Kubernetes for Node.js and Puppeteer

I am currently seeing a strange issue where one of my Pods is constantly being evicted by Kubernetes.

My Cluster / App Information:

  • Node size: 7.5GB RAM / 2vCPU
  • Application language: Node.js
  • Use case: Puppeteer website extraction (code that loads a website, extracts an element, and repeats this a couple of times per hour)
  • Running on Azure Kubernetes Service (AKS)

What I tried:

  • Checked that Puppeteer is closed correctly and that all Chrome instances are removed. After adding a forced kill of the browser process, this seems to work (see the sketch after this list)

  • Checked kubectl get events, which shows the following lines:

8m17s       Normal    NodeHasSufficientMemory   node/node-1              Node node-1 status is now: NodeHasSufficientMemory
2m28s       Warning   EvictionThresholdMet      node/node-1              Attempting to reclaim memory
71m         Warning   FailedScheduling          pod/my-deployment     0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/memory-pressure: }, that the pod didn't tolerate, 3 node(s) didn't match node selector
  • Checked kubectl top pods, which shows the pod utilizing only ~30% of the node's memory
  • Added resource limits in my Kubernetes .yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-d
spec:
  replicas: 1
  selector:            # apps/v1 requires a selector matching the template labels
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: my-image
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"   # hard cap; exceeding it OOM-kills the container
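For reference, a minimal sketch of the close-and-force-kill pattern from the first item above (the function name, selector, and launch options are placeholders, not my actual code):

const puppeteer = require('puppeteer');

async function scrape(url) {
  let browser;
  try {
    browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);
    // '#target' is a placeholder for the element actually being extracted
    return await page.$eval('#target', el => el.textContent);
  } finally {
    if (browser) {
      // Close cleanly first, then force-kill the Chromium child process
      // so orphaned renderers cannot pile up memory on the node.
      await browser.close().catch(() => {});
      const proc = browser.process();
      if (proc && !proc.killed) proc.kill('SIGKILL');
    }
  }
}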

Current way of thinking:

A node has X memory in total, but only Y of that X is actually allocatable because of reserved space. However, when running os.totalmem() in Node.js, I can see that the process still reports the full X memory as available.

My thinking is that Node.js keeps allocating up toward X because its garbage collection only kicks in near X, when it should kick in at Y instead. Moreover, with my limit set, I expected the process to see the 2Gi limit rather than the node's total memory.
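This can be verified from inside the pod. A minimal sketch (the cgroup paths are the usual v1/v2 locations on Linux nodes):

const os = require('os');
const fs = require('fs');

function cgroupMemoryLimit() {
  // cgroup v2 and v1 expose the container's memory limit at different paths
  for (const p of ['/sys/fs/cgroup/memory.max',
                   '/sys/fs/cgroup/memory/memory.limit_in_bytes']) {
    try {
      const raw = fs.readFileSync(p, 'utf8').trim();
      if (raw !== 'max') return Number(raw);
    } catch (e) { /* path not present under this cgroup version */ }
  }
  return null;
}

console.log('os.totalmem():', os.totalmem());       // reports the full node memory (X)
console.log('cgroup limit :', cgroupMemoryLimit()); // reports the container limit, e.g. 2Gi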

Question

Is there anything else I should try to resolve this? Has anyone run into this before?

Question from: https://stackoverflow.com/questions/65852051/managing-eviction-on-kubernetes-for-node-js-and-puppeteer


1 Reply


Your Node.js app is not aware that it runs in a container. It only sees the amount of memory the Linux kernel reports, and the kernel always reports the node's total memory regardless of any cgroup limit. You should make your app aware of the cgroup limits, see https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8
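For example, one common approach (a sketch, assuming Node.js 12+ where NODE_OPTIONS accepts V8 flags, with a heap size you would tune yourself) is to cap V8's old space below the container limit so garbage collection kicks in before the cgroup OOM-killer does:

spec:
  containers:
  - name: main
    image: my-image
    env:
    - name: NODE_OPTIONS
      value: "--max-old-space-size=1536"   # in MB; leaves headroom under the 2Gi limit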

With regard to evictions: now that you have set memory limits, did that solve your eviction problems?
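A related point worth checking: under node memory pressure, the kubelet evicts pods whose memory usage exceeds their requests first. Setting requests equal to limits for both CPU and memory gives the pod Guaranteed QoS and puts it last in line for eviction; a sketch (the CPU value is an assumption to adjust for your workload):

resources:
  requests:
    cpu: "500m"     # assumed value; match your workload
    memory: "2Gi"
  limits:
    cpu: "500m"
    memory: "2Gi"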

And don't trust kubectl top pods too much; it always shows data with some delay. For a live view of the node, kubectl describe node node-1 shows its allocatable memory and whether the MemoryPressure condition is currently set.

