Date:   Mon, 4 Apr 2022 10:51:48 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Zhaoyang Huang <huangzhaoyang@...il.com>
Cc:     Suren Baghdasaryan <surenb@...gle.com>,
        "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        "open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        cgroups mailinglist <cgroups@...r.kernel.org>,
        Ke Wang <ke.wang@...soc.com>
Subject: Re: [RFC PATCH] cgroup: introduce dynamic protection for memcg

On Mon 04-04-22 10:33:58, Zhaoyang Huang wrote:
[...]
> > One thing that I don't understand in this approach is: why memory.low
> > should depend on the system's memory pressure. It seems you want to
> > allow a process to allocate more when memory pressure is high. That is
> > very counter-intuitive to me. Could you please explain the underlying
> > logic of why this is the right thing to do, without going into
> > technical details?
> What I want to achieve is to make memory.low positively correlated
> with time and negatively correlated with memory pressure, which means
> the protected memcg should lower its protection (via a lower
> memory.low) to help relieve the system's memory pressure when it is
> high.
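
The relation being proposed above can be sketched roughly as follows. This is only an illustration of the stated correlations, not the RFC's actual formula: the function name `dynamic_low`, the linear growth term, and the linear pressure scaling are all assumptions made here for clarity.

```python
# Hypothetical sketch of the proposal: the effective memory.low grows
# with time and shrinks as system memory pressure rises. The linear
# form and all parameter names are illustrative assumptions, not the
# patch's real implementation.

def dynamic_low(base_low, elapsed_s, pressure_pct, growth_per_s=1 << 20):
    """Return an adjusted memory.low in bytes.

    base_low     -- the configured protection, in bytes
    elapsed_s    -- seconds elapsed (positive correlation with time)
    pressure_pct -- system memory pressure, 0..100 (negative correlation)
    """
    grown = base_low + elapsed_s * growth_per_s   # grows over time
    scaled = grown * (100 - pressure_pct) // 100  # shrinks under pressure
    return max(scaled, 0)

# Under no pressure the protection has grown past its base...
print(dynamic_low(64 << 20, elapsed_s=10, pressure_pct=0))    # 77594624
# ...and under full pressure the protection is given up entirely.
print(dynamic_low(64 << 20, elapsed_s=10, pressure_pct=100))  # 0
```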

I have to say this is still very confusing to me. The low limit is a
protection against external (e.g. global) memory pressure. Decreasing
the protection based on the external pressure sounds like it goes right
against the purpose of the knob. I can see reasons to update protection
based on refaults or other metrics from the userspace but I still do not
see how this is a good auto-magic tuning done by the kernel.
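
The status quo described above can be sketched as a userspace-driven workflow against the cgroup v2 `memory.low` knob; the path, cgroup name, and values below are illustrative only, and the "agent" step is hypothetical.

```shell
# Sketch: a userspace agent, not the kernel, owns the protection value.
# Requires a cgroup v2 mount and the memory controller enabled.
CG=/sys/fs/cgroup/workload
mkdir -p "$CG"

# Protect 512 MiB of this memcg's memory from global (external) reclaim.
echo $((512 * 1024 * 1024)) > "$CG/memory.low"

# A monitoring agent could later lower the protection itself, e.g. in
# response to observed refaults or PSI readings:
#   echo $((256 * 1024 * 1024)) > "$CG/memory.low"
```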

> The concept behind this is that a memcg faulting back its dropped
> memory is less important than the system's latency under high memory
> pressure.

Can you give some specific examples?

> Please refer to my new version's test data
> for more detail.

Please note that sending new RFCs will just make the discussion spread
over several email threads which will get increasingly hard to follow.
So do not post another version until it is really clear what is the
actual semantic you are proposing.

-- 
Michal Hocko
SUSE Labs
