Date:   Fri, 22 Mar 2019 22:00:46 +0000
From:   Chris Down <chris@...isdown.name>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>, Tejun Heo <tj@...nel.org>,
        Roman Gushchin <guro@...com>, Dennis Zhou <dennis@...nel.org>,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org, kernel-team@...com
Subject: Re: [PATCH REBASED] mm, memcg: Make scan aggression always exclude
 protection

Andrew Morton writes:
>Could you please provide more description of the effect this has upon 
>userspace?  Preferably in real-world cases.  What problems were being 
>observed and how does this improve things?

Sure! The previous patch's behaviour isn't so much problematic as it is simply
not as featureful as it could be.

This change doesn't alter the user experience much in the normal case. One
benefit is that it replaces the (somewhat arbitrary) 100% cutoff with an
indefinite slope, which makes it easier to ballpark a memory.low value.
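To illustrate the shape of the new behaviour, here is a rough sketch of the
idea (the helper name and signature are illustrative only, not the actual
mm/vmscan.c code):

        /*
         * Illustrative sketch only: scan pressure scales with the
         * unprotected fraction of usage, instead of protection ceasing
         * to matter past a hard cutoff.
         */
        static unsigned long scan_with_protection(unsigned long scan,
                                                  unsigned long usage,
                                                  unsigned long protection)
        {
                if (usage <= protection)
                        return 0;  /* fully protected: no reclaim pressure */

                /* pressure ramps up smoothly as usage exceeds memory.low */
                return scan * (usage - protection) / usage;
        }

Since there's no cliff, a ballpark memory.low value just degrades protection
gradually rather than suddenly losing all effect.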

In addition, the old methodology doesn't apply generically to machines with 
varying amounts of physical memory. Let's say we have a top level cgroup, 
workload.slice, and another top level cgroup, system.slice. We want to give 
roughly 12G to system.slice, so on a 32G machine we set memory.low to 20G in 
workload.slice, and on a 64G machine we set memory.low to 52G. However, 
because these memory.low values are relative to the total machine size, while 
the amount of memory we are generally willing to yield to system.slice is 
absolute (12G), we end up putting more pressure on system.slice just because 
we have a larger machine and a larger workload to fill it, which seems fairly 
unintuitive. With this new behaviour, we avoid that unintended side effect.
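
As a rough worked example of the new behaviour (using the illustrative
proportional formula above, and assuming workload.slice grows to fill the
machine):

        32G machine: exposed to reclaim = 32G - 20G = 12G
        64G machine: exposed to reclaim = 64G - 52G = 12G

so the amount the workload yields to system.slice stays at the absolute 12G
we intended, regardless of machine size.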
