Message-ID: <6c3fbc2d-85d9-4502-b43c-0950ccdd6f7e@redhat.com>
Date: Mon, 24 Jun 2024 13:32:34 -0400
From: Waiman Long <longman@...hat.com>
To: Michal Hocko <mhocko@...nel.org>,
 Roman Gushchin <roman.gushchin@...ux.dev>,
 Shakeel Butt <shakeel.butt@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Johannes Weiner <hannes@...xchg.org>, Chris Down <chris@...isdown.name>,
 Yu Zhao <yuzhao@...gle.com>, Axel Rasmussen <axelrasmussen@...gle.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
 Linux Memory Management List <linux-mm@...ck.org>,
 Rafael Aquini <aquini@...hat.com>,
 "cgroups@...r.kernel.org" <cgroups@...r.kernel.org>
Subject: MGLRU OOM problem

Hi,

We are hitting an OOM issue with our OpenShift middleware, which is
based on Kubernetes. Currently, it sets only memory.max when a memory
limit is configured. OOM kills are frequently encountered when we try
to write a large data file that exceeds memory.max to an NFS-mounted
filesystem. I have bisected the problem down to commit 14aa8b2d5c2e
("mm/mglru: don't sync disk for each aging cycle").

The following command can be used to cause an OOM kill when running in
a memory cgroup with a memory.max limit of 600M on an NFS-mounted
filesystem.

  # dd if=/dev/urandom of=/disk/2G.bin bs=32K count=65536 status=progress iflag=fullblock
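
For reference, the cgroup setup itself is nothing special; a minimal
sketch of what I do (cgroup v2, group name picked arbitrarily):

  # mkdir /sys/fs/cgroup/ddtest
  # echo 600M > /sys/fs/cgroup/ddtest/memory.max
  # echo $$ > /sys/fs/cgroup/ddtest/cgroup.procs

and then the dd command above is run from that shell.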

In my case, I can trigger an OOM when I run the reproducer a second
time on a test system.

In the first successful run, the reported data rate was:

   2147483648 bytes (2.1 GB, 2.0 GiB) copied, 57.5474 s, 37.3 MB/s

After reverting commit 14aa8b2d5c2e ("mm/mglru: don't sync disk for each
aging cycle"), the OOM can no longer be reproduced and the new data rate
was:

   2147483648 bytes (2.1 GB, 2.0 GiB) copied, 25.694 s, 83.6 MB/s

With MGLRU disabled (echo 0 > /sys/kernel/mm/lru_gen/enabled), the data
rate was:

   2147483648 bytes (2.1 GB, 2.0 GiB) copied, 21.184 s, 101 MB/s

I know that the purpose of commit 14aa8b2d5c2e is to prevent premature
aging of SSDs. However, I would like to find a way to wake up the
flusher whenever the cgroup is under memory pressure and has a lot of
dirty pages, but I don't have a solid approach yet.
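
For what it's worth, the dirty page buildup in the cgroup is easy to
observe while dd is running, e.g. with the same (arbitrarily named)
test group as above:

  # grep -E 'file_dirty|file_writeback' /sys/fs/cgroup/ddtest/memory.stat
  # cat /sys/fs/cgroup/ddtest/memory.pressure

(the latter needs PSI enabled). Roughly speaking, a check along the
lines of "cgroup under pressure and file_dirty high" is what I have in
mind; where exactly it should hook into the MGLRU aging path is the
part I am unsure about.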

I am aware that there was a previous discussion about this commit in
[1], so I would like to engage the same community to see if there can
be a proper solution to this problem.

[1] https://lore.kernel.org/lkml/ZcWOh9u3uqZjNFMa@chrisdown.name/

Cheers,
Longman

