4/3/2023

Clean ram ubuntu

In the past few days, some of my Pods kept crashing, and the OS Syslog showed the OOM killer killing the container process. I did some research to find out how these things work.

Pod memory limit and cgroup memory settings

Create a pod with its memory limit set to 123Mi, a number that can be recognized easily later:

```
kubectl run --restart=Never --rm -it --image=ubuntu --limits='memory=123Mi' -- sh
If you don't see a command prompt, try pressing enter.
```

In another shell, find the uid of the pod:

```
kubectl get pods sh -o yaml | grep uid
  uid: bc001ffa-68fc-11e9-92d7-5ef9efd9374c
```

On the server where the pod is running, check the cgroup settings based on the uid of the pod:

```
cd /sys/fs/cgroup/memory/kubepods/burstable/podbc001ffa-68fc-11e9-92d7-5ef9efd9374c
cat memory.limit_in_bytes
128974848
```

So it is clearer now: Kubernetes sets the memory limit through the cgroup, and once the pod consumes more memory than the limit, the cgroup will start to kill the container process.

Let's install the stress tool on the Pod through the opened shell session. In the meantime, monitor the Syslog by running:

```
dmesg -Tw
```

Run the stress tool with the memory within the limit, 100M, first:

```
stress --vm 1 --vm-bytes 100M &
271
stress: info: dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
```

Now trigger the second stress test:

```
stress --vm 1 --vm-bytes 50M
stress: info: dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: (415) <- worker 272 got signal 9
stress: WARN: (417) now reaping child worker processes
stress: FAIL: (451) failed run completed in 7s
```

Together the two workers ask for about 150M, which is over the 123Mi limit, so the kernel's OOM killer sends the new worker signal 9 (SIGKILL), exactly what I had been seeing in the Syslog.
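A note on the pod creation step above: newer kubectl releases have dropped the resource flags on `kubectl run`, so `--limits` may be rejected on your version. A minimal manifest sketch that should produce the same cgroup setting, assuming a namespace you can create pods in (the pod name `sh` matches the one queried above, and `sleep infinity` is just a hypothetical way to keep the container alive for `kubectl exec`):

```
# Sketch of an equivalent pod with a 123Mi memory limit.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sh
spec:
  restartPolicy: Never
  containers:
  - name: sh
    image: ubuntu
    command: ["sleep", "infinity"]
    resources:
      limits:
        memory: 123Mi
EOF

# Then open the interactive shell session:
kubectl exec -it sh -- sh
```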
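The value in `memory.limit_in_bytes` is exactly the 123Mi limit expressed in bytes, since Mi is a power-of-two unit: 123 × 1024 × 1024 = 128974848. A one-line check in any POSIX shell:

```
# 1Mi = 1024 * 1024 bytes, so 123Mi should match the cgroup value.
echo $((123 * 1024 * 1024))   # prints 128974848
```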
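The install step itself is glossed over above; `stress` is not present on the stock `ubuntu` image. A sketch, assuming the pod has network access to the Ubuntu package archives:

```
# Run inside the pod's shell session.
apt-get update && apt-get install -y stress
```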
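If you want to watch the kill coming, the cgroup v1 memory controller exposes a live usage counter next to the limit file. A sketch from the node, assuming the same pod uid directory found above:

```
cd /sys/fs/cgroup/memory/kubepods/burstable/podbc001ffa-68fc-11e9-92d7-5ef9efd9374c
# Usage climbs toward 128974848 while the stress workers run,
# then drops when the over-limit worker is killed.
watch -n1 cat memory.usage_in_bytes
```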